CHAPTER 7

Pluggable Databases

Pluggable databases, affectionately called PDBs for the remainder of this chapter, are a major new concept in the Oracle database and the outstanding new feature to be aware of in Oracle 12.1. From the author’s point of view, the introduction of Pluggable Databases into the Oracle kernel is a major new feature of a magnitude we have not seen for a long time. Its novelty is comparable to the introduction of Automatic Storage Management in 10g Release 1, which radically changed the way we work with Oracle and storage. In comparison, 11g Releases 1 and 2 were moderate improvements over the previous release, with new features in certain areas, but nowhere near as novel as Pluggable Databases. The following chapter gives you insight into managing Pluggable Databases in Oracle 12.1, including the new database types available. I hope that by the end of it you will be as excited as I am.

It is important to note that Oracle 12.1 is backward compatible with 11g Release 2. So if you are tight on budget or time it is comforting to know that you do not have to embrace the new feature, but then again you would forfeit many of the benefits of Oracle 12c. Maybe the following sections are enough of a teaser to start experimenting with Pluggable Databases.

Note that the use of Pluggable Databases requires you to be correctly licensed for the feature.

The consolidated hosting platform before Oracle 12.1

Pluggable databases address many of the problems administrators and designers faced with consolidation projects using previous Oracle releases. Most consolidation projects have used a shared-schema approach in which suitable applications were rebased from their current environments, often into a consolidated (RAC) database. The methods used to perform the rebasing included Transportable Tablespaces and ingenious use of the Data Pump export/import utilities. Some users chose replication technologies such as Oracle Streams, GoldenGate, or other third-party products.

The onboarding process for new applications into the hosted environment usually involves some kind of assistance from the DBA team or the service owner. The amount of support that can be made available to users of the hosting service depends greatly on funding. You have to be clear about the goalposts of the hosting service: at one extreme, the solution is fully automated and users get minimal support from the operational DBAs. At the other extreme, a dedicated onboarding team takes the user through all stages of the application move, including “intensive care” during the first couple of weeks. Ideally there is a dedicated contact person in the onboarding team answering any questions, although in many situations this ideal may be out of reach.

The hosting service using schema-based consolidation as it existed before Oracle 12.1 had to live with a number of limitations. Some of these restrictions are of a technical nature; others are procedural. The restrictions include:

  • No two identical users can co-exist in a pre-12.1 database.
  • Namespaces must be separate.
  • There should not be any grant to the PUBLIC role.
  • Use of global database links is ruled out due to security constraints.
  • Users with elevated privileges (the DBA role, all privileges with “any” in their name) can read or even modify data in schemas they should not have access to unless special access restrictions are in place.
  • Auditing is made difficult as there is no concept of logical partitioning by application—all events are recorded on the database level.
  • Restore operations are complicated by the requirement to get approvals from the applications not impacted by the data corruption.

There are of course more; the ones mentioned above are simply the most obvious. In the past, some users tried to get around the namespace restrictions by using a dedicated database per consolidated application. Although that solved quite a few of the above-mentioned problems, it came at the cost of greatly amplified overhead caused by additional background processes and memory requirements. For this reason the database-per-application approach does not scale well.

Another common way of consolidating databases was to create virtual environments with exactly one database in it. This approach is very heavily used in x86-64-based virtualization, but can also be found in the form of Solaris Zones or IBM LPARs. Although there are benefits in terms of isolation of workloads, you should not forget to account for the operating system patching if a full operating system stack is to be maintained. Granted, there is less overhead when using Operating System virtualization such as Solaris Zones, since each virtual copy of the operating system is based on the global operating environment. Figure 7-1 compares the three most popular consolidation approaches before the release of Oracle 12c.

Figure 7-1. Common consolidation methods pre Oracle 12.1

Ideally one would combine the lower memory and process footprint of the multi-tenancy RAC database with the application isolation of a single database per host. Well, you may have guessed it: that is exactly what the Pluggable Database offers: horizontal virtualization, but from inside the database. PDBs offer the best of both worlds, and allow you to achieve a much higher consolidation density if your hardware supports it.

Pluggable Databases to the rescue

With Oracle 12.1 a fundamental change has taken place in the architecture. Luckily, as with most technology introduced in the database, you still have the choice not to use it straight away. If you do, and I hope I can encourage you to do so with this book, you will have the option of using a new database type: the Container Database, or CDB.

The main benefit of using the CDB is to allow for horizontal virtualization within the database. A Container Database, as the name implies, contains at least one PDB, and up to 252 user-Pluggable Databases (253 PDBs in total, counting the seed). The Pluggable Database is the consolidation target. From a user’s point of view a Pluggable Database (PDB) appears just like any normal database before Oracle 12.1. As such, a PDB is a collection of schemas, tablespaces, and other metadata objects. The Container Database itself is merely a container and does not store user data.

The Container Database serves two purposes. First, it is the container for all the Pluggable Databases you are about to create. Second, it contains what is referred to as the CDB’s root. Named CDB$ROOT in the documentation, it is the home for information common to the CDB and the PDBs. The CDB has a further component named the seed PDB, or PDB$SEED. The seed database is a minimal PDB that you can use like a starter database when creating a database with the Database Configuration Assistant dbca. It is only ever open read-only and cannot be modified.

The Container Database instance type exists alongside the database type as we know it from pre-12.1 days. In the remaining chapters I will refer to this former database type as a “non-CDB” to make a clear distinction between the two. Interestingly, the Container Database is not limited to single instance deployments; it can also be created as a Real Application Clusters database. The deployment type does not have any impact on the Pluggable Databases for which it provides a home. Consider Figure 7-2, outlining the new “container database” instance type.

Figure 7-2. A comparison of CDB vs. non-CDB

The non-CDB, or any other Oracle database before 12.1 for that matter, did not have any facility to separate applications the way a container database does. As you know, the super-users in all releases prior to 12.1 had full visibility of all the data stored in the database. The only way around this was the use of Data Vault or comparable solutions. Although they certainly helped achieve the goal of preserving privacy in realms, they had to be configured, and maintenance operations affecting the whole database were somewhat complex to arrange and perform. It is the author’s firm belief that complexity is the enemy of successful database operations these days. If something can be simplified, it should be.

The CDB, however, is a container for multiple Pluggable Databases without much of the previously required manual configuration. The all-powerful user accounts can still connect to a PDB quite easily and view data. Without additional configuration, however, powerful users defined locally in a PDB cannot query information in another PDB. Sensitive environments should still employ additional layers of protection within the Pluggable Database.

Each Pluggable Database is completely self-contained from a user data point of view. All user data that makes up a database can safely be stored in a Pluggable Database without the risk of compromising compatibility. The Pluggable Database can easily be moved from one Container Database to another. The PDBs and CDB$ROOT share the data dictionary. Not all dictionary information is duplicated in each PDB; that would be a waste of space. Instead, a PDB maintains pointers to the dictionary of its container database’s root. Whether or not an object in the database is shared can be seen in some views, including DBA_OBJECTS. A new column, called SHARING, indicates whether the object is a metadata link to an object in the root, an object link, or neither.
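You can get a feel for how much of the dictionary is shared by querying the new column directly; as a minimal sketch, run as a suitably privileged user (in this release the possible SHARING values are METADATA LINK, OBJECT LINK, and NONE):

SQL> select sharing, count(*)
  2  from dba_objects
  3  group by sharing;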

This greatly increases flexibility and is a main differentiator from other consolidation approaches where each application is hosted in a dedicated database, each of which needs its data dictionary patched individually. With a CDB you can potentially patch hundreds of PDBs for the price of one. This chapter will also provide you with an overview of how to unplug a PDB from one Container Database and plug it into another.

In order to create a Container Database, the DBA has to make a conscious decision at creation time: the Database Configuration Assistant provides an option to create the new database as a CDB. It is important to note at this point that a non-CDB cannot be converted into a CDB. Instead, it has to be converted to a PDB, which in turn can be plugged into an existing CDB. We will see that this is necessary during migrations, which are detailed in Chapter 12.

The CDB creation required an enhancement to the “create database” statement, and the scripts run to create a custom database have changed in comparison with the non-CDB case. See the section “Creating a CDB using scripts” further down for more information on how to create a CDB from the command line.

The documentation has been updated in many places for the new Pluggable Databases Feature. Additional information about PDBs and the new structure can be found in the following books of the official documentation set:

  • Concepts Guide
  • Administrator’s Guide
  • SQL Language Reference
  • Security Guide
  • Backup and Recovery User’s Guide
  • Data Guard Concepts and Administration
  • References can be found in many other manuals as well

I personally like to keep bookmarks for these in my browser under a special “Oracle 12.1” category. I have been involved in a number of discussions about consolidation projects, and all of them mentioned at one point or another the difficulty of finding a consensus between the engineers and the operational DBAs when using Oracle before 12.1. Great technology is certainly of a lot of academic value, but if level one and level two support simply cannot support it from day one, a lot of time and money is wasted. It once again comes down to skill, and sadly skill often weighs less than cost per hour. With Oracle 12.1 a good balance can be reached between new technology and operational maintenance procedures.

Guide to rest of chapter

The rest of the chapter will detail Pluggable Databases and all the relevant concepts around them for running a Container Database on your host. It will not try to expand on backup and recovery for such databases. There is a separate chapter covering that topic. It also does not attempt to introduce disaster recovery setup: this is done in a separate chapter as well.

The new features about PDBs are quite rich, and to keep this chapter from being too inflated, the above-mentioned topics have been selected as good candidates for their own chapters.

Implementation details for Pluggable Databases

One of the design goals for Pluggable Databases was to allow for backward compatibility. In other words, anything that was possible with a database pre-12.1 or a non-CDB should be possible with a PDB.

It will soon become apparent that this promise has been fulfilled. One of the restrictions found during testing was the inability to connect to a PDB using operating system authentication. One could argue either way whether this is a serious problem, but alternatives are available to fill the void. Local O/S authentication was a neat way to execute jobs on the database server without having to undergo password maintenance on the account.

EXTERNAL AUTHENTICATION

External authentication has been available with Oracle for a long time. Its main function is to allow users to attach to the database to run batch processes or sqlldr instances without hard-coding passwords in shell scripts. External authentication is not unproblematic from a security point of view, mainly because it bypasses some settings made in profiles.

On the other hand, not having to worry about password maintenance, password complexity, and storing passwords is an attractive feature. Consider the following example:

A user created using the “create user username identified externally” clause will be able to log in to Oracle after logging in to the respective UNIX account username on the same host. The syntax is simple: a “sqlplus /” will suffice. The same is true for sqlldr and other utilities running on the same host. Since there is no password, you do not need to worry about maintaining one either. As you can imagine, there is no involvement of the network layer at all. This is no longer possible in 12c: as you will see later, you need to connect to a PDB using a database service.
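As a sketch of the pre-12.1 behavior, the setup looked roughly like this. The account name batchuser is made up, and depending on the os_authent_prefix initialization parameter the Oracle account may need a prefix such as OPS$:

SQL> create user batchuser identified externally;
 
User created.
 
SQL> grant create session to batchuser;
 
Grant succeeded.

After logging in to the UNIX account batchuser on the database server, a passwordless “sqlplus /” establishes the session.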

Bottom line: O/S authentication is a thing of the past for PDBs; users should employ the secure external password store instead.
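The password store is covered in more depth in Chapter 10; as a minimal sketch, with a made-up wallet directory and the credentials stored against a TNS alias, the setup could look like this:

[oracle@server1 ∼]$ mkstore -wrl /home/oracle/wallet -create
[oracle@server1 ∼]$ mkstore -wrl /home/oracle/wallet -createCredential swingbench1 batchuser secretPwd

With WALLET_LOCATION pointing at that directory and SQLNET.WALLET_OVERRIDE = TRUE set in sqlnet.ora, a “sqlplus /@swingbench1” connects without prompting for a password.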

Physical structure of a CDB and PDB

The structure of the Container Database is different from the non-CDB or any previous release. Figure 7-3 illustrates the main new concepts in a CDB.

Figure 7-3. An overview of the Container Database

To begin with, the main differentiator is that you can see at least one Pluggable Database in addition to the root. The database you see to the left is a special one, called the seed database. If you decided to create PDBs during the CDB creation, there can be additional PDBs as well: up to 252 user-PDBs in Oracle 12.1.

When thinking about the PDB-CDB relationship one can use the analogy of Solaris zones. Every modern Solaris installation consists of a global zone, which resembles the root container in the Container Database. The recommendation is not to use the global zone for applications, and exactly the same applies to the CDB: storing user data in the root container is as bad as installing applications in the global zone, as it defeats the main purpose of the self-contained unit. You would not store user data in the SYSTEM tablespace either, would you?

Only one listener is required on the database host for single instance Oracle deployments; it manages connections for applications, ad-hoc query users, and generally speaking all other database server to client traffic. When a PDB is opened, a service named after the PDB is started by default. Additional services can easily be created, for example to allow for fine-grained charge-back models, as shown below.
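As a sketch, assuming a user-PDB named swingbench1 and a made-up service name swb1_batch, an additional service can be created and started from within the PDB using the DBMS_SERVICE package:

SQL> alter session set container = swingbench1;
 
Session altered.
 
SQL> begin
  2    dbms_service.create_service(
  3      service_name => 'swb1_batch',
  4      network_name => 'swb1_batch');
  5    dbms_service.start_service('swb1_batch');
  6  end;
  7  /
 
PL/SQL procedure successfully completed.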

Containers

After the initial creation of the Container Database, i.e., the top-level container for all components to be stored, you find yourself in new territory ready to be explored. Since the concept of Pluggable Databases did not previously exist in Oracle, even from a conceptual point of view, it takes a little time to familiarize yourself with the new world.

At first, you need to understand the concept of a container. The container is your namespace, i.e., the scope of the statements you issue. The introduction of the container is not a problem for application owners and developers as you will see shortly, but more of an issue to the database administrator. You need to know which container you are currently connected to. The simple rule to remember: if you used the command “sqlplus / as sysdba” you are in the root container, or CDB$ROOT.

Oracle has once again increased the number of keys available in the USERENV namespace to allow you to query which container you are connected to. This can be quite useful in maintenance scripts to make sure you are not accidentally connected to the wrong PDB! You can query the container number as well as the container name you are connected to as shown in this example:

SQL> select sys_context('userenv','con_name') from dual;
 
SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT
 
SQL> select sys_context('userenv','con_id') from dual;
 
SYS_CONTEXT('USERENV','CON_ID')
--------------------------------------------------------------------------------
1

If you are a bit lazy, you can add a special piece of code to your login.sql to show the container you are connected to:

col usr new_value usr
col con new_value con
define con=started
define usr=not
set termout off
select sys_context('userenv','con_name') as con, user usr from dual;
set termout on
set sqlprompt '&&usr.@&&con.> '

The root container has a new class of views, called CDB-views. These contain information about the root and all its PDBs. This is useful to get a full overview of the database, but it will not list information for PDBs which are mounted and not open. Additionally, the user executing the query needs to have the privilege to view the PDB. Consider the following example from the Container Database CDB1, executed as SYS in CDB$ROOT:

SQL> select con_id,open_mode, name from v$pdbs;
 
    CON_ID OPEN_MODE  NAME
---------- ---------- ------------------------------
         2 READ ONLY  PDB$SEED
         3 MOUNTED    SWINGBENCH1
         4 MOUNTED    SWINGBENCH2

As you can see, the user-PDBs swingbench1 and swingbench2 are mounted (closed). A PDB can either be mounted or open. Querying the container data object CDB_DATA_FILES will not show data files belonging to a mounted PDB:

SQL> select count(*) from cdb_data_files where con_id=3;
 
  COUNT(*)
----------
         0
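The missing step is opening the PDB, for example:

SQL> alter pluggable database swingbench1 open;
 
Pluggable database altered.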

Only after the PDB is opened will you see the relevant information:

SQL> select count(*) from cdb_data_files where con_id=3;
 
  COUNT(*)
----------
         3

Don’t get confused by this when assessing space usage, and verify which container you are logged in to! If you don’t get the expected results it is well worth checking the container (available via sys_context) and your database connection.

When viewing information about containers you should bear in mind that the first 3 container IDs are always fixed:

  • Container 0 indicates that the information in the view applies to the whole CDB as such.
  • Container ID 1 is reserved for the root: CDB$ROOT
  • Container ID 2 is reserved for the seed PDB: PDB$SEED

For queries against the CDB-views you could add a “where” clause to skip the first three containers by requesting a CON_ID > 2.
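For example, a simple space summary restricted to user-PDBs could look like this:

SQL> select con_id, sum(bytes)/1024/1024 as mb
  2  from cdb_data_files
  3  where con_id > 2
  4  group by con_id;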

Common physical structures

The Container Database shares a few components with all PDBs. Some of these are optionally shared, others must be shared. The CDB does not differ much from a non-CDB, except that there are additional files for each Pluggable Database. Each Container Database uniquely owns the following tablespaces:

  • SYSTEM
  • SYSAUX
  • UNDOTBS1 (multiple of those in RAC)
  • TEMP
  • USERS (if the CDB has been created by dbca)

Don’t be misled by the existence of the USERS tablespace in the CDB: its only purpose is to prevent data from accidentally being stored in the SYSTEM tablespace. Remember that user data is stored in the PDBs, not the CDB! Pluggable Databases have their own SYSTEM, SYSAUX, and optionally TEMP tablespaces, plus all the user data files. Interestingly, the TEMP tablespace is called TEMP in the PDB as well as in the CDB, but the file names indicate that they are actually different. Additional tablespaces can be added to a PDB just as in any release before Oracle 12.1.

Online redo logs as well as standby redo logs belong to the CDB and are shared with all PDBs. This makes for interesting scenarios when log mining is required. It is suggested you test your favorite log mining based replication technology with a high volume of redo generation before promoting it to production!

The control file is neither CDB nor PDB specific, although it obviously has to keep information pertaining to the PDBs currently plugged into the CDB.

The diagnostic destination used for the Automatic Diagnostic Repository (ADR) is not a physical structure as such, but it is useful to know that the logging facility is based on the CDB. Individual PDBs do not have their own diagnostic destination.

Implications for the Data Dictionary

The data dictionary as we know it needed tweaking, and can now take containers into account. This can be confusing at times, especially when you are used to typing commands quickly and without looking. Before exploring the new data dictionary further, let’s step back a little bit and review how the dictionary was used before Pluggable Databases.

When you created a new database using dbca or the scripted method, you were presented with a pristine data dictionary. Pristine in this context means that the only rows stored in the dictionary were Oracle’s own metadata. This situation is shown in Figure 7-4.

Figure 7-4. Pristine data dictionary after database creation

But since the purpose of every database is to store user data, your dictionary (and the underlying tables in the SYS schema) started to become intertwined with user metadata. This applied to data structures within the user tablespaces, as well as to source code. There is nothing wrong with this by the way; it’s the price you pay for using the database! Consider Figure 7-5, which shows the dictionary after it has been populated with user data, such as tables, columns, indexes, and audit information.

Figure 7-5. User data and Oracle metadata are now intertwined

There is an implication to this method of storing metadata, as you will probably recall from your operational experience. Your stored code, for example, could be invalidated by modifications to the data dictionary as part of patching. It therefore was necessary to recompile dependent code in the database whenever an Oracle package had been modified by a patch.

The stroke of genius applied for Pluggable Databases is quite simple in theory: instead of allowing user data to “pollute” the Oracle namespace in the CDB$ROOT Database, there is a strict separation of what is stored in the PDB vs. the CDB. Instead of duplicating each row of the data dictionary on each PDB, only the delta is stored within the PDB. The remaining information exists as a pointer from the PDB to the data in the CDB. This not only conserves a lot of space since the PDB’s dictionary does not contain redundant information, but it also makes the PDB a lot more independent. This is an important aspect when it comes to unplugging/re-plugging a PDB from one CDB to another.

You already guessed it: the introduction of containers made it necessary to add metadata about them. When researching this book I often found myself selecting from the DBA_% views—those used to contain everything before Oracle 12.1. If you did not find what you were looking for in a DBA% view it probably did not exist. In the current release, such views no longer return all that you might expect. The confusion is quite easily lifted if you consider the following rules:

  • The DBA% views list all the information in the current container, but not sub-containers. In other words, you can for example view all the common users in the CDB$ROOT, but it won’t show you the users in the seed PDB or any user-defined PDB.
  • The same rule applies for the USER% and ALL% views—their scope is the current container.
  • Caveat: the new class of CDB% views does not list information about mounted PDBs. To view dictionary information about a PDB it has to be opened read only or read write.
  • The V$-type views list information about PDBs as well when you query them from the CDB$ROOT.

The following two queries return the same result, but have been executed in different containers. Both queries achieve the same goal: they list all the users for container 3.

First, here is a query executed from the CDB:

SYS@CDB$ROOT> select username,authentication_type,common
  2  from cdb_users
  3  where con_id = 3
  4  order by username;
 
USERNAME                       AUTHENTI COM
------------------------------ -------- ---
ANONYMOUS                      PASSWORD YES
APEX_040200                    PASSWORD YES
APEX_PUBLIC_USER               PASSWORD YES
APPQOSSYS                      PASSWORD YES
[...]
 
45 rows selected.
 
SQL>

Next is a query executed as SYSTEM while logged on to the PDB:

SYSTEM@PDB1> select username,authentication_type,common
  2  from dba_users
  3  order by username;
 
USERNAME                       AUTHENTI COM
------------------------------ -------- ---
ANONYMOUS                      PASSWORD YES
APEX_040200                    PASSWORD YES
APEX_PUBLIC_USER               PASSWORD YES
[...]
 
45 rows selected.
 
SQL>

The security model obviously had to change as well. You can read more about that subject later in this chapter in section “Users and roles”.

The Pluggable Database

After you have created the initial Container Database it is time to consider your options on how to proceed. You will have at least one PDB in your CDB: the seed database. Other Pluggable Databases could have been created as well, but in most cases you may want to keep the CDB lean during creation.

PDBs can be created using ASM and OMF or on the file system. The remainder of the chapter will try to accommodate both of these but sometimes it is necessary to focus on just one. From a manageability point of view using ASM with OMF is the easiest way to administer a CDB with PDBs.

The seed PDB

Every Container Database comes with exactly one seed database, and that fact cannot be changed. The seed database has a fixed container ID of 2, and is opened read only by default. Every user-PDB starts its life as a clone of the seed PDB, directly or indirectly. The seed contains the bare minimum Oracle deems necessary to get a PDB cloned and started. For that reason the seed is lightweight and comes with only two tablespaces: SYSTEM and SYSAUX. It also has its own TEMP tablespace.

The seed database cannot be modified at all; every attempt to alter or drop it will result in “ORA-65017: seed pluggable database may not be dropped or altered”. This has an interesting implication: if you were hoping to modify the seed to match your organization’s database standards, then this is bad news. However, you can maintain your own custom “seed” database from which you clone your user-PDBs, as you will see in the section “Managing Pluggable Databases”.

The User-Pluggable Database

A user PDB in this book denotes a Pluggable Database that has been cloned from either another user-PDB or the seed database, and is open for business. In other words, the user-PDB is where the application lives; it is responsible for user connections and for storing application data. Whenever you read about a PDB in the sections that follow, you are probably reading about a user PDB even if that is not explicitly mentioned.

As you will see shortly, Pluggable Databases are very flexible and can easily be transported between Container Databases.

AN EXAMPLE FOR A BIG WIN

Consider the case of an independent software vendor. As with any support organization working with Oracle databases you need to keep past versions of your application available to reproduce problems and create test cases. Internally, you may have databases named after the current, future, and past releases. Depending on your support policy you may even have to support these databases for different Oracle releases.

When it came to patching, the application of a Patch Set Update for example, a lengthy process started. First the respective Oracle home was patched out of place, and then all required databases. Patching can become a full Saturday’s work when catupgrd.sql has to run on so many databases.

In this situation, a Pluggable Database would be a great match. Patching the binary home out of place is still required, but doesn’t require much application downtime. The team could simply have created a new CDB with the upgraded dictionary, unplugged the PDBs and plugged them into the new CDB.

A Pluggable Database is not automatically opened in Oracle 12.1. This has to do with how PDBs are treated in RAC, where you are not obliged to open a PDB on every instance. This is vaguely similar to RAC One Node, where you have a multi-node cluster database that is only ever open on one cluster node. You can save memory and process footprint by deciding to selectively open PDBs on instances in the cluster. In addition, this approach allows for a gentle migration from a single instance to a RAC database, without having to worry about cross-instance messaging and associated potential cluster waits.
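If you nevertheless want all PDBs to open together with the CDB, a commonly used workaround in 12.1 is a startup trigger in the root. A minimal sketch:

SQL> create or replace trigger open_all_pdbs
  2  after startup on database
  3  begin
  4    execute immediate 'alter pluggable database all open';
  5  end;
  6  /
 
Trigger created.

The trigger fires when the instance opens and issues a single statement covering all PDBs; you could equally name individual PDBs if only a subset should be opened.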

Creating a Container Database

Before you can create any Pluggable Database it is necessary to create the container first. This is a very important decision: it is impossible to convert a non-CDB to a CDB database.

Note  You can always transform a non-Container Database into a Pluggable Database and add it to an existing Container Database. However, you cannot convert a non-Container Database into a Container Database.

There are three popular ways to create a Container Database in Oracle 12.1: using SQL scripts, by invoking the Database Configuration Assistant in silent mode, and finally via dbca’s graphical user interface. The scripted method has changed profoundly from the previous way of creating databases. To fully understand the process of a CDB creation it is useful to start the Database Configuration Assistant in GUI mode and let it create the scripts for you. You can see examples of these scripts in the “Creating a CDB using scripts” section below. If you prefer the GUI approach throughout, then there is nothing stopping you from creating your CDB using that tool from end-to-end. An example for the process is shown in the section “Using Database Configuration Assistant to create a CDB” below.

Just like in previous releases you have the option to run dbca in either silent or GUI mode. Silent mode offers you flexibility, allowing you to run dbca from scripts or in environments where for security reasons you cannot have X11-forwarding or even an X server on the machine. Combined with a standards-compliant database template, dbca is the tool of choice for rapid, standards-compliant database creation.

Memory considerations for CDBs

You have seen in Chapter 3 that x86-64 servers are able to take more and more memory and make good use of it. Even a modest dual-socket E5-2600-v2 series Xeon system could potentially address 1.5 TB of memory depending on server model. Memory is readily available most of the time, and in consolidated servers memory should not be scarce: that would be saving at the wrong end.

That being said, as with any consolidation platform you can expect a lot of consolidated systems, PDBs or otherwise, to be very memory hungry. Interestingly, there is no measurable overhead when opening a mounted PDB; however, a large number of processes will obviously be a strain on the number of connections the system can serve. To measure the impact of sessions connecting to the Oracle server, a little experiment has been conducted. Consider the PDB “swingbench1”, which has been used extensively to benchmark system performance. How much overhead does a new connection add to the operating system without large pages enabled? A small 8 GB SGA will already suffer a huge overhead from a larger number of sessions doing very little actual work when large pages are not configured. Consider the following example of an idle system. With the database started, the PageTables entry in the /proc/meminfo file for Oracle Linux 6.4 shows about 46 MB used:

[oracle@server1 ∼]$ grep -i paget /proc/meminfo
PageTables:        46380 kB

A little script generating some XML on the fly serves as the benchmark:

[oracle@server1 ∼]$ cat xml.sql
declare
  a xmltype;
begin
  a := xmltype('<a>toto</a>');
  dbms_lock.sleep(20);
end;
/
 
exit;

The test consists of 700 concurrent executions of the same script, launched from the current shell:

[oracle@server1 ∼]$ for i in $(seq 1 700) ; do
>  sqlplus /@swingbench1 @xml &
> done

Note  This example uses the secure external password store to connect to a TNS alias without having to provide a password. You can read more about that connectivity method in Chapter 10.

How much overhead does this generate? A lot! Periodically checking the PageTables entry on my system showed that with all 700 sessions connected, the kernel allocated around 750 MB:

[oracle@server1 ∼]$ grep -i paget /proc/meminfo
PageTables:       750680 kB

Going back to the initial state of an otherwise idle system, you will find that the amount of memory allocated to page tables is a lot lower and nearly back to the initial 46 MB. Let’s review the same test with large pages enabled. The maximum value registered with large pages enabled is shown here:

PageTables:       25228 kB

If you needed more persuasion to allocate large pages for your database, then hopefully this should give you another good reason to do so.
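Configuring large pages on Linux is a two-step process; the following is a hedged sketch for the 8 GB SGA used above, assuming 2 MB huge pages. The page count must be sized for your own SGA, and the oracle user additionally needs an appropriate memlock limit in /etc/security/limits.conf:

# /etc/sysctl.conf: reserve slightly more than the SGA, here roughly 8.2 GB
vm.nr_hugepages = 4200

-- instance parameter: refuse to start unless the entire SGA fits into large pages
SQL> alter system set use_large_pages=only scope=spfile;

After applying the sysctl setting (a reboot is the safest way to get the pages allocated) and restarting the instance, the PageTables figure should stay low even under a high session count.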

Deciding about storing files

The storage options available to the database administrator have not changed with Oracle 12.1. As with previous releases you have the choice of a supported file system or Automatic Storage Management (ASM). With ASM comes a strong encouragement to use Oracle Managed Files. In this respect it does not matter whether the storage is local to the host, or provided via a SAN or NFS. Most readers will be familiar with the use of ASM and Oracle Managed Files; if you would like a refresher, please refer to Chapter 2 for more information.

If you are not using ASM, you need to think about the location of the seed PDB’s files. Remember from the section “Common physical structures” that the seed is always present in a Container Database and has a SYSTEM, SYSAUX, and a TEMP tablespace. Since the seed database cannot be modified after it has been created, some planning is necessary. The seed’s data file locations can be configured in the following places:

  • Using the “seed file_name_convert” clause in the “create database” command.
  • Setting the “db_create_file_dest” initialization parameter to enable Oracle Managed Files.
  • Using the “pdb_file_name_convert” initialization parameter.

The easiest way to deal with the seed’s and all user-PDBs’ data files is to use Oracle Managed Files. With OMF you do not need to worry about any conversion parameters, but this comes at the cost of slightly less control, especially when using ASM: you give away some command over file names. In addition, ASM is not a traditional file system where you can use ls, tar, gzip, and the other tools available on a regular supported file system. Additionally, the data files used by any PDB, seed and user alike, include the PDB’s globally unique identifier (GUID) in the file name. A PDB’s SYSTEM tablespace, for example, has the following fully qualified name in ASM:

+DATA/CDB1/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.259.826623817
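The GUID component of the path can be cross-referenced with the data dictionary, for example:

SQL> select pdb_name, guid from dba_pdbs;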

If you are not using OMF, then your approach will be different. You will shortly see that Oracle by default creates a sub-directory for the seed database:

[oracle@server1 ∼]$ find /u01/oradata -type d
/u01/oradata/CDB1
/u01/oradata/CDB1/pdbseed

This directory structure works well for the initial seed database. To make best use of the Pluggable Databases feature it can be useful to define the storage location of user-PDBs on SAN-attached storage. This way, unplug and plug operations from one database server to another can be much simpler: all that is required is a remount of that mount point on another host, followed by plugging the database in. In this scenario the seed database would still go to /u01/oradata/${ORACLE_SID}/pdbseed, but the user-PDBs could go to /u02/oracle/PDBs/<pdbname>/ for example. Since the seed is never used for anything other than cloning a PDB, its read/write performance can usually be ignored, and it can be placed on slower storage.

The most convenient way to achieve the goal of ease-of-access of your user-PDBs when not using OMF is to keep the seed file name convert clause in the create database statement as it is, and set the initialization parameter “pdb_file_name_convert” to the appropriate value. Please refer to section “The create pluggable database statement” further down in this chapter for more information about the creation of PDBs and defining their storage location on a file system.

Using Database Configuration Assistant to create a CDB

This is the most basic way to get started with the new database release. Unlike some other Oracle tools, which have seen only minor cosmetic change, dbca looks quite different. Begin by executing $ORACLE_HOME/bin/dbca to launch the graphical user interface. The initial page allows you to perform a variety of tasks, including creating and dropping databases. It also has a new menu item to manage pluggable databases. For now, select the “Create Database” option to get to the next screen.

The utility offers two different ways to create a new database: typical and advanced. The main decision to be taken is shown in Figure 7-6.

Figure 7-6. Database Creation

The typical options are the ones you see on the screen—they can help you to get started quickly. When opting for the advanced options, you are guided through a few more wizard steps, which are explained in Table 7-1.

Table 7-1. Wizard Steps and Options

  1) Database Operation: Create a database.

  2) Creation Mode: Typical or Advanced; select Advanced.

  3) Database Template: Select which template to use for the new database. This primarily affects the database initialization parameters. The default software installation provides templates that include data files, as well as a custom option. The existence of data files means that dbca will perform an RMAN restore of the data files associated with the template, whereas the custom option creates the database from scripts, which takes longer. You can create your own database templates and use them for standards-compliant database creation, possibly the best way to deploy databases at larger scale.

  4) Database Identification: Select a global database name and an Oracle SID. Please remember that the Oracle SID has a maximum length on most platforms. You should also select “Create As Container Database” here. You can create the CDB either empty or with a number of PDBs.

  5) Management Options: Specify whether you want to register the new CDB with Enterprise Manager Cloud Control. If you check that box you need an agent already installed on the host. Alternatively you can configure Enterprise Manager Express, or none of the above. EM Express replaces the DB Console that existed between Oracle 10.1 and 11.2.

  6) Database Credentials: On this screen you enter the passwords for the SYS and SYSTEM accounts. They can be either separate or identical, the separate option offering more security of course.

  7) Network Configuration: On this screen you define which listener the database should be registered with. If you do not see a listener although one is started, you should check whether a listener.ora file exists. You can also create a new listener on this screen if you like.

  8) Storage Location: It is possible to use different storage options for database files and the Fast Recovery Area. In most cases you would keep to one storage option, either file system or ASM. If you are using a file system location you have the further option to use Oracle Managed Files, and with OMF you can optionally multiplex redo logs and control files. Finally you can decide to enable archivelog mode straight away.

  9) Database Options: On this screen you can configure Database Vault and Oracle Label Security. Both are out of scope for this chapter.

  10) Initialization Parameters: You can set the memory parameters, block size, character sets, and connection mode in this part of the wizard. It has not changed from the 11.2 wizard page and is fairly self-explanatory.

  11) Creation Options: Again a screen that has not changed from the previous release; here you can choose to create the database and/or to generate the database creation scripts. It is a good idea to create the scripts as well to become more familiar with the database creation process in Oracle 12c.

  12) Prerequisite Checks: The dbca utility validates your choices against the system to ensure that all prerequisites, such as free space to create the database, are met. Normally this screen is not visible: if dbca does not find anything to complain about it moves immediately on to the next screen.

  13) Summary: On this page you are presented with an HTML-styled document summarizing the options you chose. It lists the database components you selected and other important information such as the Oracle SID and whether or not the database to be created is a CDB.

  14) Progress: The progress page informs you how far into the database creation you have progressed. At the end of the database creation you are presented with account management information in a pop-up window, which you confirm before exiting dbca. The new database is ready for use.

After you have created the database it will be ready to accept user connections. If the database was installed on a system with ASM, it will automatically be registered with Oracle Restart.

Creating a CDB using scripts

The method of creating an Oracle database from scripts has changed with the introduction of the Container Database. It would also appear that the process of creating a CDB from scripts takes longer, but this might be a subjective impression. The example scripts below have been created using the Database Configuration Assistant and should be modified for your environment. Some parts of the scripts have already been adjusted. A common location has been chosen for all data files, which could be a problem for production deployments on contended storage but serves as a good example regarding the location of the PDB files. By default dbca creates the scripts in

$ORACLE_BASE/admin/${ORACLE_SID}/scripts/

The main script to invoke when creating a database is a shell script named after the database. After creating the supporting directory structure, the CDB creation begins with the create database command. Using the previously mentioned common location for all data files, the following directories, among others, are created by the script:

mkdir -p /u01/oradata/CDB3
mkdir -p /u01/oradata/CDB3/pdbseed

The shell script ${ORACLE_SID}.sh sets some environment variables for Oracle’s perl implementation delivered as part of the RDBMS installation. Before continuing you should heed the advice in the script and add an entry for the new database in the oratab file.
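The entry follows the familiar oratab format; for the database created in this example it could read:

# /etc/oratab
CDB3:/u01/app/oracle/product/12.1.0.1/dbhome_1:N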

You would expect the CDB$ROOT’s files to be stored in /u01/oradata/CDB3, and the seed’s files in /u01/oradata/CDB3/pdbseed. After all administrative tasks have completed, this slightly modified “create database” statement is executed:

CREATE DATABASE "CDB3"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
  DATAFILE '/u01/oradata/CDB3/system01.dbf'
  SIZE 700M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
  SYSAUX DATAFILE '/u01/oradata/CDB3/sysaux01.dbf'
  SIZE 550M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP
  TEMPFILE '/u01/oradata/CDB3/temp01.dbf'
  SIZE 20M REUSE AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1"
  DATAFILE  '/u01/oradata/CDB3/undotbs01.dbf'
  SIZE 200M REUSE AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE
  GROUP 1 ('/u01/oradata/CDB3/redo01.log') SIZE 50M,
  GROUP 2 ('/u01/oradata/CDB3/redo02.log') SIZE 50M,
  GROUP 3 ('/u01/oradata/CDB3/redo03.log') SIZE 50M
USER SYS IDENTIFIED BY "&&sysPassword"
USER SYSTEM IDENTIFIED BY "&&systemPassword"
enable pluggable database
seed file_name_convert=(
 '/u01/oradata/CDB3/system01.dbf','/u01/oradata/CDB3/pdbseed/system01.dbf',
 '/u01/oradata/CDB3/sysaux01.dbf','/u01/oradata/CDB3/pdbseed/sysaux01.dbf',
 '/u01/oradata/CDB3/temp01.dbf','/u01/oradata/CDB3/pdbseed/temp01.dbf',
 '/u01/oradata/CDB3/undotbs01.dbf','/u01/oradata/CDB3/pdbseed/undotbs01.dbf'
);

The dbca-generated script creates a much longer seed file name convert clause, with one pair of entries for each data file in the CDB. Please bear in mind that the above command is just an example; review it and modify the values appropriately.

TESTING THE FILE_NAME_CONVERT PARAMETERS

The file_name_convert parameters are well known to Oracle DBAs who have cloned databases using RMAN in the past. The parameter behaves like the replace() function in SQL, which makes for easy testing. Connect to your database as an administrative user and issue the following query:

SQL> select file_name,
     2  replace(file_name,
     3          'string to be matched', 'string to replace match with')
     4      from dba_data_files;

The resulting output of the replace function will tell you the location of the data file with the conversion parameter applied.

SQL> select replace(file_name,'/u01/','/u02/') from dba_data_files;

REPLACE(FILE_NAME,'/u01/','/u02/')
-----------------------------------------------------
/u02/oradata/CDB1/datafiles/system.257.780854341
/u02/oradata/CDB1/datafiles/sysaux.256.780854265
/u02/oradata/CDB1/datafiles/users.258.780854405
/u02/oradata/CDB1/datafiles/undotbs1.259.780854407

This technique applies for many Oracle processes, including RMAN database cloning.

The SQL script then executes the scripts necessary to generate the data dictionary, and all the database components you need. What makes these scripts different from previous releases and the non-CDB is the invocation via the catcon.pl script. Consider the following snippet used to create the Oracle Database Catalog Views and Oracle Database Packages and Types (CreateDBCatalog.sql):

...
alter session set "_oracle_script"=true;
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open;
host perl /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/CDB3/scripts -b catalog /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/admin/catalog.sql;
host perl /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/admin/catcon.pl -n 1 -l /u01/app/oracle/admin/CDB3/scripts -b catblock /u01/app/oracle/product/12.1.0.1/dbhome_1/rdbms/admin/catblock.sql;
...

Note  The scripts creating the dictionary are documented in the Oracle Database Reference 12.1 manual, in Appendix B on “SQL Scripts.”

As you can see, the scripts we used to run while directly connected to the database are now channeled via the catcon.pl Perl script. This is the reason for setting the perl environment variables in the top-level shell script. The purpose of catcon.pl is to execute one or more scripts in a CDB, or within one or more PDBs. The parameters specified in the snippet shown above indicate that:

  • The directory /u01/app/oracle/admin/CDB3/scripts should be used for log files (“-l”)
  • The log file base name is the argument to the option “-b” (catalog, for example)
  • The final argument is the file to be executed

The catcon.pl script is involved whenever dictionary scripts are executed in a CDB. It is potentially very useful, and thankfully Oracle documented it in the Administrator’s Guide, Chapter 40, in the section “Running Oracle Supplied SQL scripts in a CDB”.

The creation process takes a little while, depending on the options you need to be present in the Container Database. As with most things in life, less is more, which is especially true with Oracle databases. Not only does the execution of each additional dictionary script take more time, it also opens the door for exploits. If your application does not make use of the interMedia or Spatial options for example, don’t add those options to the database. In releases before Oracle 12.1 one could also argue that any unneeded component in the data dictionary used up precious time during upgrades. But as you have seen in a number of places in this book, that argument does not carry as much weight anymore: instead of patching the whole database, you may be able to simply unplug your PDB and plug it into an already patched CDB. Bear in mind though that the options in the target CDB need to be compatible with the PDBs you intend to plug in. In some cases you will still have to recompile the PDB-specific parts of the dictionary.

Note  You can use dbca to create templates from your “golden image” CDB. Instead of creating the database manually every time, which is time consuming and therefore not very efficient, you can create the database from the template instead.

Exploring the new CDB

Once the new Container Database is created, it is time to connect to it and explore the new Oracle release. Have a look around; use the CDB% views to query the data dictionary. Create a few PDBs from the seed and see how quickly you can provision a new environment. If you don’t see information about a PDB when querying the data dictionary, check its open mode and open the PDB read write or read only: if a PDB is not open in one of those states, the CDB% views won’t report any information about it. Also remember the scope of your commands: when connected to the root (CDB$ROOT) with DBA privileges you should be able to see all the information pertaining to the attached and accessible containers. If you are connected to a PDB, you will only see the information pertaining to that PDB.

You might also want to familiarize yourself with the physical structure of your CDB. You can check the v$parameter view to see where your control files are located, and use cdb_data_files and cdb_temp_files to check where your data and temp files are, all keyed on the con_id of each container. You will also see that there is no CDB-view for online redo logs: online redo logs are not container specific and belong to the Container Database as such. The column CON_ID in V$LOG shows them as members of CON_ID 0. Once you are feeling more comfortable with your database it is time to move on to Pluggable Databases.
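This is easy to verify; on my test system the query returned:

SQL> select distinct con_id from v$log;
 
    CON_ID
----------
         0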

Managing Pluggable Databases

Just before I started writing this section I saw a very good discussion on my favorite mailing list about the methods DBAs prefer for creating their databases, especially given that the execution of catalog.sql and catproc.sql plus any ancillary scripts takes a lot of time. I couldn’t tell them there and then, but what made me rejoice was that I could have told them: the creation of a new “database” won’t take a lot of time at all! That of course depends on your understanding of “database”. If you consider a database an entity to give to users in which they can run their applications, then a PDB fits the description one hundred percent.

You may recall from the discussion about containers earlier in the chapter that there is always a seed database in a CDB, called PDB$SEED. It can be cloned in almost no time to spin off a new database for users to start working with their new data store. If the rather Spartan way the seed-PDB is set up does not suit you, there are other ways to address the need: any PDB can serve as the source for a clone operation. It is even possible to clone a PDB over a database link, although that method requires careful testing due to bandwidth and latency considerations. Before exploring each of these methods in more detail it is necessary to describe an extension to the SQL language: the “pluggable” keyword in the create database statement.

Creating a Pluggable Database

The SQL reference has been extended once again to accommodate the new features available with the new database release. The create/alter database statement has been extended with the “pluggable” attribute. Whenever the “pluggable” keyword is found in any of these examples, you immediately know that the command is aimed at one of the containers. The SQL language reference however has a dedicated section about the “create pluggable database statement”; it is not an extension of the “create database statement”.

It almost goes without saying that the user executing any of these statements needs elevated privileges. A new system privilege has been introduced, which unsurprisingly has been called “create pluggable database”. Before you can create a new PDB, you need to consider your options:

  • Create a new PDB from the seed
  • Create a new PDB from another local PDB
  • Create a new PDB from a remote PDB
  • Plug-in a PDB into a CDB
  • Plug a non-CDB into a CDB as a new PDB

The following sections detail all these steps. Regardless of which source you chose to clone the PDB, you have a number of clauses available to fine-tune your new Pluggable Database.

Creating a PDB from the seed database

Creating a PDB from the seed database is the easiest way to get started with Pluggable Databases. By default, the new PDB will consist of a SYSTEM, SYSAUX, and TEMP tablespace; undo is provided by the CDB. It will be accessible by all common users (see the section “Users and roles” later in the chapter for an introduction to the new security concept). For now let it suffice that a common user can connect to the root and any container using the same username and password. The other class of user is the local user which, as you probably assumed, can only connect to specific PDBs.

When you create a PDB from the seed, you need to specify a mandatory admin user. This is the only mandatory clause you need to specify if you are using Oracle Managed Files or ASM for that matter. Therefore, the basic statement to create a PDB is as simple as this:

SYS@CDB$ROOT> create pluggable database pdb2
  2  admin user pdbdba identified by password;
 
Pluggable database created.

A few seconds later the statement completes and you have your first working PDB! The operation is documented in the alert log of the CDB:

create pluggable database pdb2 admin user pdbdba identified by *
2013-09-25 10:35:23.445000 +01:00
****************************************************************
Pluggable Database PDB2 with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#22 to file$(old file#5)
Adding new file#23 to file$(old file#7)
2013-09-25 10:35:24.814000 +01:00
Successfully created internal service pdb2 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database PDB2 with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: create pluggable database pdb2 admin user pdbdba identified by *

image Note  Although the admin user clause is mandatory to create a user-PDB, no one forces you to actually make use of the account. It can be easily locked as part of the database build process after the PDB has been opened.

Further options you can make use of include the definition of a default tablespace in the PDB. Omitting the clause leads to the new PDB’s SYSTEM tablespace becoming the default tablespace. Luckily, Oracle allows you to create that new tablespace if it does not yet exist, which is the case when you are cloning your PDB from the seed database. The syntax is based on the well-known “create tablespace” command, reusing the same data file name spec and extent management clause. Below is an example of creating a PDB from the seed with a new default tablespace residing on OMF:

SYS@CDB$ROOT> create pluggable database pdb3
  2  admin user pdbdba identified by secret
  3  default tablespace pdb3_default_tbs datafile size 10G
  4  /
 
Pluggable Database created.

If you are implementing a consolidated platform solution, then you will certainly appreciate the ability to limit the size of the PDB. This way a user could order a 100 GiB database, and your operations team does not have to limit the size of individual data files: all the settings are implemented at deployment time. The pdb_storage_clause can be used either to set the maximum size to unlimited, or to limit the amount of shared temp space (for sorting on disk) as well as the size of the persistent storage. If the pdb_storage_clause is omitted there is no limit to the size of the PDB, other than the physical storage of course. For example, you could limit the data file growth to 100 GB as shown in this example:

SYS@CDB$ROOT> create pluggable database pdb4
  2  admin user pdbdba identified by secret
  3  storage (maxsize 100G)
  4  default tablespace pdb4_default_tbs datafile size 10G
  5  autoextend on next 10G;
 
Pluggable database created.

If it should turn out during the lifetime of the database that more space is needed, the limit can be modified, as shown below.
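
A minimal sketch of such a change, assuming pdb4 from the example above should be allowed to grow to 200 GB; the command operates on the current container:

SQL> alter session set container = pdb4;
 
Session altered.
 
SQL> alter pluggable database storage (maxsize 200G);
 
Pluggable database altered.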

The new PDB can of course be placed in a different location. The file_name_convert clause allows you to specify where the new PDB’s files should be created. Refer back to the earlier parts of this chapter to review the basics of file name conversion in Oracle. Consider for example this CDB, which resides on a file system rather than on ASM as in the previous examples:

SQL> select con_id, file_name from cdb_data_files
  2  order by con_id
  3  /
 
    CON_ID FILE_NAME
---------- --------------------------------------------------
         1 /u01/oradata/CDB2/users01.dbf
         1 /u01/oradata/CDB2/undotbs01.dbf
         1 /u01/oradata/CDB2/sysaux01.dbf
         1 /u01/oradata/CDB2/system01.dbf
         2 /u01/oradata/CDB2/pdbseed/system01.dbf
         2 /u01/oradata/CDB2/pdbseed/sysaux01.dbf

This implies the CDB’s files are all under /u01/oradata/CDB2/, with the seed database’s files in the “pdbseed” subdirectory. Since the seed’s files are not overly sensitive to I/O requirements, you can store them on a lower storage tier. The other PDBs however are a different story and have different I/O requirements from the seed. Assume that /u01/oradata/CDB2/pdbs/ is the default mount point for PDBs. You could then clone your seed as follows:

SQL> create pluggable database fspdb1 admin user pdbdba identified by password
  2  file_name_convert=(
  3    '/u01/oradata/CDB2/pdbseed/','/u01/oradata/CDB2/pdbs/fspdb1/');
 
Pluggable database created.

The file locations are reflected accordingly:

SQL> select con_id, file_name
  2  from cdb_data_files
  3  where con_id = 4;
 
    CON_ID FILE_NAME
---------- --------------------------------------------------
         4 /u01/oradata/CDB2/pdbs/fspdb1/system01.dbf
         4 /u01/oradata/CDB2/pdbs/fspdb1/sysaux01.dbf

The final clause to the “create pluggable database” statement allows you to grant roles to the admin user; this will be discussed later in the chapter in the section “Users and Roles”.

After the PDB has been created, it is in mount mode. To access it you need to open it; see further down in the chapter for more information about opening and closing PDBs. The PDB also needs backing up.
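
A quick sketch of opening the newly created PDB, using pdb2 from the first example:

SYS@CDB$ROOT> alter pluggable database pdb2 open;
 
Pluggable database altered.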

Cloning from another PDB

The ability to clone a PDB from another PDB is a very nice feature. The clone operation handles the file copy as well, which makes the creation of a spin-off PDB a very simple task. As with the creation of a user-PDB you have several options as to where to place the files and how to restrict the storage usage of the new PDB.

Cloning a local PDB is the first case to be demonstrated. To that end, the PDB to be cloned is plugged into the current CDB. The source you want to clone from has to be opened in read-only mode, which is very similar to the requirement when exporting metadata for transportable tablespaces. Closing an active PDB is a disruptive process and therefore needs to be carefully coordinated with the business users. To initiate the cloning process, shut down the source PDB in preparation for making it read-only:

SQL> alter pluggable database pdb1 close immediate;
 
Pluggable database altered.

Once the PDB has been closed, you can reopen it in read-only mode:

SQL> alter pluggable database pdb1 open read only;
 
Pluggable database altered.

The cloning process in its most basic form requires no arguments when OMF is in use, especially not an admin user clause. Consider the following example on a system using Oracle Managed Files:

SQL> create pluggable database pdb5 from pdb1;
 
Pluggable database created.

On a regular file system without OMF the command requires the file name conversion parameter. All files belonging to a PDB are always contained under a single directory or mount point. You cannot avoid the file name conversion and bundle all the PDBs up under a single directory such as /u01/oradata/pdbs/, because the file names of the individual PDBs would clash. From a deployment perspective you should therefore create mount points for different storage tiers, with sub-directories for each PDB.

The example below does not make use of this scheme, but rather uses a generic mount point “pdbs”, under which all the PDBs are created. The user-PDB “fspdb1” has already been created and is now in read-only mode, waiting to be cloned:

SQL> create pluggable database fspdb2 from fspdb1
  2  file_name_convert = ('/u01/oradata/pdbs/fspdb1','/u01/oradata/pdbs/fspdb2')
  3  /
 
Pluggable database created.

When cloning a PDB you cannot assign a default tablespace in the same command, nor can you set or modify the roles for the PDB administrator as shown in the previous section. You can however make use of the storage clause and limit the temporary and permanent storage used by the PDB, as shown below. Changes to the physical structure of the PDB need to be completed after the PDB has been created and opened.
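
A minimal sketch of a clone with a storage limit; the PDB name and maxsize value are arbitrary, and pdb1 is assumed to still be open read-only:

SQL> create pluggable database pdb6 from pdb1
  2  storage (maxsize 20G);
 
Pluggable database created.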

Once the PDB is created it needs to be backed up.

Cloning from a remote PDB

Cloning a user-PDB from a remote source requires a syntax very similar to cloning from a local source. In essence, all that is required is a database link between the destination and the source; the operation itself is a pull from the remote CDB via the database link. Similar advice applies to this operation as to RMAN’s ability to duplicate an active database: don’t overload your network! If you can, you should use tagged VLANs for this kind of operation, or maybe not clone over the network at all.

In the example that follows you find two Container Databases: CDB1 and CDB2. CDB1 currently has a PDB called swingbench1 which needs to be cloned as swingbench2 under CDB2. Before spending time on the clone operation you need to ensure that the CDBs use the same character set. In Oracle 12.1 a clone operation from a remote database requires identical character sets, as does the plug operation as you will see shortly. Additionally, you need to ensure that the database options between the source and destination CDB are identical.

As with any clone operation, the source has to be opened read-only. Again, make sure that all the stakeholders of the application are aware of that operation so that applications can be shut down cleanly prior to the event. Before shutting down the source, create a database link to the CDB$ROOT of your source, then enter the familiar command “create pluggable database cloneName from sourceName@dblink”.
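
The following is a minimal sketch of the procedure, executed on CDB2; the link name, the common user, and its password are assumptions, and the user on the source CDB needs the “create pluggable database” privilege. The sketch also assumes your sqlnet configuration permits the EZConnect-style connect string:

SYS@CDB$ROOT> create database link cdb1_link
  2  connect to c##cloneuser identified by secret
  3  using '//server1.example.com:1521/CDB1';
 
Database link created.
 
SYS@CDB$ROOT> create pluggable database swingbench2
  2  from swingbench1@cdb1_link;
 
Pluggable database created.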

And there you have it: a PDB cloned over the network, which opens up interesting possibilities.

Moving PDBs between different containers

Another way to add a PDB to a system is to unplug it on the source and plug it into the destination. The best use case for this is the application of a patch on the source CDB. Instead of suffering downtime while the CDB is patched, you could easily unplug/plug the PDB from the database to be patched to another, already patched CDB.

Oracle has developed a procedure to facilitate that process, and it is based on an XML document that travels with the PDB to be moved. The API gives you a great deal of flexibility over the process. Once a PDB is unplugged, it is no longer associated with the CDB it was last plugged into, but it is nevertheless still listed, as you will see in a bit. The unplug command is remarkably simple, and the only variables to it are the user-PDB name and the location of the XML file describing it. To export user-PDB pdb5 for example, you can issue this command:

SQL> alter pluggable database pdb5 unplug into '/home/oracle/pdb_ddl/pdb5.xml';
 
Pluggable database altered.

For the command to succeed, the PDB to be exported must be in mount state. When choosing a name for the XML file, you should make it obvious which database it describes; the database (unique) name, for instance, makes a great file name. After the XML file has been created, the PDB is not removed from the list of PDBs in v$pdbs. The XML file describing the PDB contains information about the PDB’s tablespaces, plus some general information required for plugging the database into a CDB. Below is an excerpt from the XML file describing pdb5:

<?xml version="1.0" encoding="UTF-8"?>
<PDB>
  <pdbname>PDB5</pdbname>
  <cid>5</cid>
  <byteorder>1</byteorder>
...
  <guid>E732F81E6B640DC7E0436638A8C03EB1</guid>
...
  <tablespace>
    <name>SYSTEM</name>
    <type>0</type>
    <tsn>0</tsn>
    <status>1</status>
    <issft>0</issft>
    <file>
      <path>+DATA/CDB1/E73[...]8C03EB1/DATAFILE/system.275.827060209</path>
      <afn>30</afn>
      <rfn>1</rfn>
      <createscnbas>2390117</createscnbas>
      <createscnwrp>0</createscnwrp>
      <status>1</status>
      <fileblocks>34560</fileblocks>
      <blocksize>8192</blocksize>
      <vsn>202375168</vsn>
      <fdbid>3887550129</fdbid>
      <fcpsw>0</fcpsw>
      <fcpsb>2392086</fcpsb>
      <frlsw>0</frlsw>
      <frlsb>2256999</frlsb>
      <frlt>826900946</frlt>
    </file>
  </tablespace>
...
  <tablespace>
...
  </tablespace>
...
</PDB>

There are more entries for each tablespace in the PDB, as well as all the database options, whether or not Database Vault or Label Security are in use, and important initialization parameters. The XML file name is a very important piece of information for the remaining process, as you will see next. Before you can plug a PDB into a CDB, the following requirements have to be met in Oracle 12.1:

  • The source and destination CDB must have the same endianness (check the field “byteorder” in the above XML file). In other words, you cannot directly transport a PDB from a CDB on Solaris SPARC into a CDB on Linux x86-64.
  • The PDB must have the same or compatible character set, and also the same national character set. Do you see the value of standardization here?
  • The destination CDB must have the same options installed in the data dictionary as the source CDB if the PDB makes use of them. You can view the options used by the PDB in the XML file describing it, in the tag <options>; each option is listed alongside its version. Use DBMS_PDB.CHECK_PLUG_COMPATIBILITY() to check the compatibility of PDB and CDB options, as the sketch following this list shows.
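
A minimal sketch of such a check, run on the destination CDB against the XML file created earlier; the function returns TRUE when the PDB can be plugged in, and any problems are recorded in the PDB_PLUG_IN_VIOLATIONS view:

SYS@CDB$ROOT> set serveroutput on
SYS@CDB$ROOT> declare
  2    v_compatible boolean;
  3  begin
  4    v_compatible := dbms_pdb.check_plug_compatibility(
  5      pdb_descr_file => '/home/oracle/pdb_ddl/pdb5.xml',
  6      pdb_name       => 'PDB5');
  7    dbms_output.put_line(
  8      case when v_compatible then 'compatible' else 'not compatible' end);
  9  end;
 10  /
compatible
 
PL/SQL procedure successfully completed.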

The API for plugging the exported PDB into a CDB gives you a great deal of freedom over the process. The basic command is the now well-known “create pluggable database” statement used previously, with the extension of the “using” clause. Depending on how you stored the unplugged PDB, and whether you want to clone the source or store it in a different location you have many options.

  • To create the new user-PDB as a clone: the “as clone” clause to the “create pluggable database” command instructs Oracle to generate new identifiers for the PDB to be plugged in. Most importantly, it ensures that the new PDB is distinct from the source PDB from an Oracle point of view.
  • If you moved the unplugged PDB physically, or made it available via an NFS mount for example, then the XML file does not represent the file location of the PDB to be plugged in. Instead of editing the XML file manually, you should use the source_file_name_convert option. It is exactly the same as the file_name_convert clause already explained earlier in the chapter.
  • If you have already copied or moved the PDB’s files to their final location, then you can use the nocopy keyword to instruct Oracle not to copy them from the source location indicated in the XML file, which would otherwise be the default. Alternatively, you can specify the move keyword to move, rather than copy, the files to the new location.
  • In addition to specifying the source_file_name_convert parameter, you can also tell Oracle where to move files to using the file_name_convert clause as already explained.

As you can see, the number of options available to detail what should happen with an unplugged PDB is quite large. In addition to the ones mentioned here you can also specify the storage clause, which has already been discussed in the section “Create a PDB from the seed database”. The options to use depend on your situation. If you are using Oracle Managed Files, then you most likely do not need to consider the file_name_convert clause—the OMF naming convention will take care of the destination location. Years of experience working with Oracle ASM have taught the author that it is often best not to try to outsmart ASM by micro-managing file names and locations.

You are given great flexibility in specifying where a file comes from and where it should be moved to. Assume for example that the data files for the production PDB “vipdb1” are described in the XML document similar to the excerpt that follows:

...
      <path>/u01/oradata/pdbs/vipdb1/system01.dbf</path>
...

You could further assume that the files pertaining to the PDB have been copied via sftp to a staging location on the UAT database server: /m/transfer/pdbs/vipdb1/. Both the production and the UAT server use Linux file systems to store Oracle data files. Using that information, you would use the following syntax to plug the PDB into the UAT CDB:

SYS@CDB$ROOT> get create_pdb_plugin.sql
  1  create pluggable database vipdb1uat as clone
  2   using '/home/oracle/vipdb1.xml'
  3  source_file_name_convert=(
  4   '/u01/oradata/pdbs/vipdb1','/m/transfer/pdbs/vipdb1')
  5  file_name_convert=(
  6   '/m/transfer/pdbs/vipdb1','/u01/oradata/CDB1/VIPDB1UAT')
  7* tempfile reuse
SQL> /
 
Pluggable database created.

Another interesting use case is the migration from a file system to ASM. In the past this was a simple operation involving image copies of the data files, to which the database was then switched over. With PDBs this has become a slightly different operation, as shown here:

SQL> create pluggable database asmpdb2 using '/m/transfer/pdbs/fspdb1.xml'
 2   copy source_file_name_convert = ...
 3   /

The destination CDB has its db_create_file_dest initialization parameter set to ‘+DATA’, allowing the PDB to be copied into the ASM disk group ‘+DATA’.

Plugging a non-CDB into a CDB

Plugging in a non-CDB is your opportunity to move databases just migrated to Oracle 12.1, or other databases that haven’t been created as Container Databases into the consolidation platform. For this scenario you can make use of another package created by Oracle, called DBMS_PDB. The procedure of plugging in a non-CDB into a CDB is very similar to the one just described in the section “Moving PDBs between different containers”.

To plug the non-CDB into the container as a Pluggable Database, a few steps need to be followed. In the first step you use the DBMS_PDB package’s describe() procedure while connected to the non-CDB to create the XML metadata file. The file can only be created if the non-CDB is opened read-only. Shut the non-CDB down after the XML file has been created successfully, in preparation for the next step.

Connect to the CDB you want to use as the non-CDB’s container and use the “create pluggable database” statement to plug the database into the new container. Refrain from opening the new PDB just yet: connect to the PDB and execute the noncdb_to_pdb.sql script in $ORACLE_HOME/rdbms/admin first to complete the conversion to a PDB.
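
The following is a minimal sketch of the complete procedure; the non-CDB name, the XML file path, and the new PDB name are assumptions:

-- on the non-CDB, opened read-only
SQL> exec dbms_pdb.describe(pdb_descr_file => '/home/oracle/ncdb.xml')
SQL> shutdown immediate
 
-- on the destination CDB; nocopy leaves the data files in place
SQL> create pluggable database ncdbpdb using '/home/oracle/ncdb.xml' nocopy;
SQL> alter session set container = ncdbpdb;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database ncdbpdb open;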

This section has been kept short deliberately: You can read more about plugging in a non-CDB into a CDB in Chapter 12, complete with an example.

Connecting to Pluggable Databases

Connections to PDBs are based on the service created automatically when the PDB is opened. The service name equals the PDB name and is registered with the listener. This has three important implications:

  1. You need to use net services to connect to the PDB; Operating System authentication (using “ops$” accounts) is not possible.
  2. EZConnect will prove really useful.
  3. If you have two CDBs on your host and both have a PDB named PDB1, then you cannot reliably connect to one or the other. Such a configuration should be avoided.

Consider the following example of a listener started out of the Grid Infrastructure home, using the default values:

[oracle@server1 ∼]$ lsnrctl service
 
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 25-SEP-2013 13:50:25
 
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
[...]
Service "CDB1" has 2 instance(s).
  Instance "CDB1", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
  Instance "CDB1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
[...]
Service "pdb1" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
[...]
The command completed successfully

Note the service “pdb1” in the above output, belonging to instance CDB1; some output has been removed for the sake of clarity. When you open the PDB either read-only or read write, you will see that the service is automatically created and started for you.

With this in place it is possible to connect to the PDB as you would to any other service before Oracle 12.1. The obvious choice is to create a net service name for the user-PDB:

PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )

Now you can connect to the PDB.

In addition to the default service name you can of course create additional services for your PDB. Either use DBMS_SERVICE for this purpose or, if you are on ASM (and therefore have Grid Infrastructure), create the service with srvctl:

[oracle@server1 ∼]$ srvctl add service -db CDB1 -service soe -pdb swingbench1

Note the new “pdb” flag in the above command. Once the service is created you can start it to allow user connections, as shown below.
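
A quick sketch of starting the service, reusing the names from the above command:

[oracle@server1 ∼]$ srvctl start service -db CDB1 -service soe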

A CASE FOR EZCONNECT

Many sites I visited over the years were against the use of EZConnect, except perhaps for those running Real Application Clusters 11.2, where the “remote_listener” parameter makes use of the EZConnect syntax.

EZConnect shortens the URL required to connect to a database, and very often becomes a time saver when trying to connect to a database service for which no tnsnames.ora entry exists, and where change management would prevent a change to the file in time for the project. So instead of the “connect user@netServiceName” syntax, you could use this:

SQL> connect soe/[email protected]/swingbench1

You can even specify a different listener port if needed:

SQL> connect soe/[email protected]:1523/swingbench1

The reason I heard most often as to why EZConnect was banned was security. However, EZConnect does not add a security problem; it has always been possible to connect to the database service without a service name defined in tnsnames.ora. So instead of using the more convenient statement shown above, you could use this rather long connection string instead:

Enter user-name: soe/passwd@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)
(HOST=server1.example.com)(PORT=1521))
(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=swingbench1)))

As you can see, EZConnect should not pose an additional threat. Nevertheless these discussions need to be had with your security department; the above can only serve as an example.

Moving within the CDB

During the preparation of this chapter I found myself switching from one PDB to another and back to the CDB$ROOT quite a lot. When using EZConnect this quickly turns into a lot of typing. If you are connected as a common user (see the section “Users and Roles” further down in this chapter for more information about common and local users) and possess the “SET CONTAINER” system privilege, then you can move between containers quite easily. Many DBAs still blindly type “sqlplus / as sysdba” into their terminal session and end up in the root container. Rather than using the EZConnect syntax “connect system/xxx@localhost:listenerPort/servicename”, the easier variant is to set the container appropriately, as shown here:

SQL> alter session set container = SWINGBENCH1;
 
Session altered.
 
SQL> select sys_context('userenv', 'con_name') from dual
  2  /
 
SYS_CONTEXT('USERENV','CON_NAME')
----------------------------------------------------------------------------
SWINGBENCH1

There is an important implication of this command compared to a network connection: any logon trigger defined does not fire after the alter session command has completed (there are workarounds). The container names can be found in V$PDBS.NAME, with the exception of the root, which is referred to as CDB$ROOT.

PDB-specific initialization parameters

Initialization parameters can be set at different scopes in the Oracle 12c database. Most apply to the whole Container Database, but some can be set specifically for a PDB. Most of the session-modifiable parameters can be set at the PDB level while connected to the PDB. To find out which parameters are PDB-modifiable, you can query the system:

SQL> SELECT NAME FROM V$PARAMETER WHERE ISPDB_MODIFIABLE='TRUE' ORDER BY NAME;

At the time of this writing there were 171 parameters that could be set at the PDB level. They remain persistent across lifecycle changes of the PDB. Earlier in this chapter you saw how you can transfer PDB-specific initialization parameters as part of the unplug/plug operation.
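
A minimal sketch of setting such a parameter, assuming a PDB named pdb1 and a session with the appropriate privileges; cursor_sharing is one of the PDB-modifiable parameters:

SQL> alter session set container = pdb1;
 
Session altered.
 
SQL> alter system set cursor_sharing = 'FORCE' scope = both;
 
System altered.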

Considerations for hosted environments

With all this information now in your hands, you can put a lot of it to practical use in your database consolidation project. Instead of creating individual schemas in a database and living with the limitations of multi-tenancy in Oracle 11.2 and before, you can now create new environments very quickly. The example below demonstrates how quickly you can create new PDBs from the seed database residing on ASM:

SQL> set timing on time on
22:53:34 SQL> begin
22:53:35   2   for j in 10..20 loop
22:53:37   3     execute immediate 'create pluggable database pdb' || j ||
22:53:39   4      ' admin user pdbadmin identified by secret';
22:53:40   5   end loop;
22:53:42   6  end;
22:53:43   7   /
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:02:20.48

As you can see from the timing information, this anonymous block creating eleven PDBs completed in under two and a half minutes. The time to deploy a new environment for a requesting business unit in your organization is comparable to the time it takes to create a schema in a pre-12.1 database. Even though this process is already fast, it can be further enhanced by an automated database build procedure.

Assume for example that your build process creates a CDB as described but doesn’t stop there. You could have a PDB all set up to match your organization’s standards on a shared location in your network, exported via NFS. You can clone and plug that reference PDB into the new CDB as a template for future user-PDBs. Or maybe create a CDB with common users that are required by the database management and monitoring tools. That way you reduce the latency of cloning a PDB over the network and pay the price of over-the-network cloning only once, yet you have a fully working reference PDB you can clone locally.

Chapter 3 presents a few considerations about storage classes or tiers. Using the file_name_convert clause in the “create pluggable database” statement you can cater for these as well and create the PDB on premium storage if so requested.

This leaves one concern for the administrators: how can Oracle ensure that a “runaway” PDB does not take all resources of the server and thus starve everybody else? This question has been anticipated by Oracle in the requirements and design phase, and as a result Database Resource Manager (DBRM) has been enhanced to deal with inter-PDB resource requirements. Later in this chapter you can read how to start managing resources within a CDB.

Opening and closing PDBs

You read earlier in the chapter that a PDB is not accessible immediately after its creation. This is how the process has been defined. The lifecycle states for a PDB are:

  1. MOUNTED: the Pluggable Database is not accessible to users, and information about it will not be displayed in the CDB% views.
  2. OPEN read-only: accessible for queries and ready to be cloned, but no modifications are possible. The PDB must specifically be opened with the “read only” clause to set it into this mode.
  3. OPEN read write: the PDB is available to users. This is the default state when opening the PDB.
  4. OPEN restricted: opens the PDB read write, but allows only users with the “restricted session” privilege to connect.

Additionally, after each restart of the CDB, all PDBs currently plugged in will be back in MOUNTED mode, irrespective of their previous open mode. Once the CDB is opened, you can open your PDBs. Luckily the syntax for doing so is straightforward, and it is possible to open all PDBs in a single command:

SQL> alter pluggable database all open;

You can even instruct Oracle to open all PDBs except for some you don’t intend to open. The exception to the rule is the seed database, which will always be open read-only, ready to be cloned.

SQL> alter pluggable database all EXCEPT PDB5 open;
 
Pluggable database altered.

The open mode of the PDB has to be compatible with the open mode of the CDB however. In other words, if you opened the CDB in read-only mode, then you cannot open a PDB in read write mode.

Since Oracle won’t automatically open PDBs, a little bit of code in combination with a startup trigger can help in this situation. Consider the following sample:

SQL> create table pdbtab tablespace users
  2  as select name, 'YES' autostart, 'RW' openmode
  3  from v$pdbs
  4  where name <> 'PDB$SEED';
 
Table created.
 
SQL> alter table pdbtab add constraint c_autostart
  2  check (autostart in ('YES','NO'));
 
Table altered.
 
SQL> alter table pdbtab add constraint c_openmode
  2  check (openmode in ('RO','RW'));
 
Table altered.

With the “pdbtab” table in place, it is possible to define a trigger that fires after the database has been started. It is possible to read the list of PDBs in the table “pdbtab”, and execute a piece of dynamic SQL to open the PDB according to the requested open mode. A possible trigger could be written like this one:

SQL> create or replace trigger autoPDBtrig
  2  after startup on database
  3  declare
  4    v_role v_$database.database_role%type;
  5  begin
  6    select database_role into v_role from v$database;
  7    if v_role = 'PRIMARY' then
  8      for i in (select name,openmode from pdbtab a where exists
  9                (select 1 from v$pdbs b where b.name = a.name)
 10                 and autostart='YES')
 11      loop
 12        execute immediate 'alter pluggable database ' ||
 13          i.name || ' open ' || case when i.openmode = 'RO'
 14                                then 'read only' end;
 15      end loop;
 16    end if;
 17* end;
SQL> /
 
Trigger created.

The trigger first checks whether the database is in the primary role; if not, PDBs should probably not be opened. Following this verification the code traverses all the entries in PDBTAB that correspond to PDBs existing in the CDB, with the intention of opening them accordingly. This way all your PDBs are opened in a user-defined fashion when the CDB starts. You need to add an entry to the pdbtab table after deploying a new PDB, or it will not be taken into consideration during the start of the CDB.

The syntax for closing PDBs is very similar. As with opening PDBs, you can name individual PDBs to be closed, as shown below. Alternatively you can instruct Oracle to close all of them using the “alter pluggable database all close” command. This will wait for user sessions to disconnect, similar to a “shutdown” command in the non-CDB. To force users to disconnect, specify the “immediate” keyword. This can be useful if you have a short change window and need users to get off the PDB quickly in order to prepare it for cloning.
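
A quick sketch of closing several named PDBs in one statement; the PDB names are arbitrary:

SQL> alter pluggable database pdb1, pdb2 close immediate;
 
Pluggable database altered.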

Users and roles in the context of a PDB

As you can imagine, the introduction of the Container Database brings with it changes to the security model. The most profound change affects database user accounts and database roles. Users in Oracle 12.1 are either common users or local users. All Oracle-supplied users are common users. Figure 7-7 shows the relationship between common and local users.

9781430244288_Fig07-07.jpg

Figure 7-7. Common users and local users compared

Common users are common to the whole CDB, and implicitly to all PDBs currently plugged into the root container. They have the same password in every container, and some Oracle-maintained accounts can also execute the “alter session set container” command to quickly change from one container to another without having to reconnect. Common users not maintained by Oracle need to be granted the relevant privileges first. Common users will also exist in all future PDBs.

That poses an interesting question for your non-Oracle monitoring accounts: will you create them as common users in a dbca template, or do you create a PDB template with the monitoring account created locally? Many factors favor the common account, which only needs to be maintained in one place when it is locked or its password needs to be reset.

Creating common users

To create a common user you need to be connected to the root container, CDB$ROOT, as a common user with appropriate privileges. A new addition to the “create user” and “grant” commands, called “container”, allows the creation of common users. Common user names must be prefixed with C##, otherwise you will receive an error. Take this example for user c##sysbackup:

SYS@CDB$ROOT> create user c##sysbackup identified by secret
  2  container = all;
 
User created.
 
SYS@CDB$ROOT> grant sysbackup to c##sysbackup;
 
Grant succeeded.

The “container” clause seems optional, since all users created in the root are common users by default. Using it makes the purpose more visible though, especially in automation, where someone reading your code can immediately grasp the intention. Thankfully Oracle prevents the creation of non-common users in the root container:

SYS@CDB$ROOT> create user localuser identified by secret container = current;
create user localuser identified by secret container = current
                                    *
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

The newly created user exists in all attached PDBs, and could theoretically connect to all of them if granted the appropriate permissions. There is no further change to the way a database user is created in Oracle 12.1: you still assign quotas, profiles, and default permanent and temporary tablespaces. Note that the tablespaces must exist in all PDBs or you will get an error similar to this one:

SYS@CDB$ROOT> alter user c##donexistinnbcd default tablespace users;
alter user c##donexistinnbcd default tablespace users
*
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in
           pluggable database PDB4
ORA-00959: tablespace 'USERS' does not exist

image Note  User profiles, auditing, and security considerations are not in the scope of this chapter. You need to ensure that the relevant standards your information security team comes up with are implemented and enforced.

What was true in Oracle before 12.1 is true for the current release: you cannot connect to a container without the “create session” privilege. This is true for the container the user was created in, and any additional containers. The only exception is a user granted the very powerful “set container” system privilege. Due to the way common users are organized, there cannot be more than one common user with the same name in the CDB. In this respect common users behave exactly like users in a non-CDB.

Common users can be granted the right to query information about currently attached PDBs without actually having to connect to them. This is useful for monitoring accounts where a common user is used by the monitoring software. The “alter user” command has been extended for common users, and it is possible to specify a container_data clause, in which you define which containers’ data a common user should have access to. The following example walks you through the creation of such a user. The plan is to grant the user access to v$session data for PDB1 without the privilege to connect to PDB1 directly.

SYS@CDB$ROOT> create user c##perfuser identified by secret ...
 
User created.

Once the user is created, you can grant it the right to view information in v$session for PDB “pdb1”. The CDB has three user-PDBs in total.

SYS@CDB$ROOT> alter user c##perfuser
  2   set container_data = (CDB$ROOT, PDB1)
  3   for v_$session container = current;
 
User altered.

The “set container_data” clause allows the user “c##perfuser” to query v$session for information pertaining to “pdb1”. Remember that you need to be connected to the root, and you need to limit the command to the current container. The “c##perfuser” also needs the create session privilege and a select grant on v_$session:

SYS@CDB$ROOT> grant create session to c##perfuser;
 
Grant succeeded.
 
SYS@CDB$ROOT> grant select on v_$session to c##perfuser;
 
Grant succeeded.

With the above modification the common user can query the view v$session for “pdb1” with con_id of 3.

C##PERFUSER@CDB$ROOT> select con_id, username from v$session
  2 where type='USER' and username <> 'SYS';
 
    CON_ID USERNAME
---------- ------------------------------
         1 C##PERFUSER
         3 USER1

The “set container_data” clause is available as part of the alteration of users. The above example has been quite specific: the intention was to only grant the privilege to select from v$session for sessions belonging to “pdb1”. You can find this stored in the view DBA_CONTAINER_DATA:

SYS@CDB$ROOT> select * from dba_container_data
  2  where username = 'C##PERFUSER';
 
USERNAME             D OWNER OBJECT_NAME     A CONTAINER_NAME
-------------------- - ----- --------------- - --------------------
C##PERFUSER          N SYS   V_$SESSION      N CDB$ROOT
C##PERFUSER          N SYS   V_$SESSION      N PDB1

If you would like to grant the same privilege for all current and future PDBs, you can use the following command:

SYS@CDB$ROOT> alter user c##perfuser set container_data = all
  2  container = current;
 
User altered.

This allows the user to view information from container data objects across all PDBs. For obvious reasons this should be used with care. To undo the grant of “all” you have to use the following command:

SYS@CDB$ROOT> alter user c##perfuser set container_data = default
   2  container = current;

In summary, the use of the “container_data” clause might be more secure than granting access to the PDB as a whole. Think of it in similar terms to a Virtual Private Database. Your deployment process however must ensure that the container data clause includes any new PDBs. You can check which privileges have been granted by querying the DBA_CONTAINER_DATA view.

Creating local users

As the name implies, local users are local to a PDB, and they are a major consolidation feature in Oracle 12.1. In previous releases it was simply not possible to have multiple users with the same name in the same database; now it is perfectly possible, as long as these users are in different PDBs. What you cannot have, however, is a common user with the same name as a local user. Each local user owns objects independently of any other PDB. A local user must be created in the current container: the “container” clause doesn’t take the name of a PDB as an argument.

image Note  Remember that you cannot use operating system authentication for local users in a PDB.

Connect to the PDB as a common user with the DBA role, such as SYSTEM or SYS, and then create the user as you would in a non-CDB. The same clauses apply to creating a user in a PDB as in the CDB, as discussed before. A more complete example of a user creation is shown below:

SQL> create user luser identified by luserpwd
  2  default tablespace users
  3  temporary tablespace temp
  4  quota unlimited on users
  5  profile appSpecificProfile
  6  password expire
  7  account unlock
  8  container = current;
 
User created.

Checking the CDB_USERS view in the CDB root, you will see that the user belongs only to the container it was created in, as the sketch below shows.
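
A minimal sketch of such a check, assuming the user from the example above was created in a PDB with container id 3:

SYS@CDB$ROOT> select username, con_id from cdb_users
  2  where username = 'LUSER';
 
USERNAME                           CON_ID
------------------------------ ----------
LUSER                                   3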

The PDB admin user is a special kind of local user. It is created as part of the clone from the seed database and is automatically granted the PDB_DBA role. Although, as you will see in the next section, the privileges granted to the role can differ between PDBs, the PDB_DBA role itself is a common role:

SYS@CDB$ROOT> select common from dba_roles where role = 'PDB_DBA';
 
COM
---
YES

By default, and in the absence of any dedicated grants, the admin user has only a few privileges. Assume for example you created the PDB as shown here:

SYS@CDB$ROOT> create pluggable database pdb1
  2 admin user pdbadmin identified by secretPassword;
 
Pluggable Database created.

The role allows the admin user to connect to the database and perform some administration tasks. Its power is limited by default; if you need more, you need to make use of the “roles” clause in the PDB creation to ensure that the admin user really deserves the name. Examples of how to make use of the roles clause have been provided earlier in this chapter; please refer back to the section “Create a PDB from the seed database” for more information and examples.

Local users cannot by definition create common roles: they would have to connect to CDB$ROOT to do so, which they simply cannot. However, a local user can grant a common role such as CONNECT to another local or common user, provided the grantor has the right privileges to do so.

Common roles

Similar to users, roles can be common or local. When you are connected to the root container and create a role, it is automatically assumed that the role is a common role. You cannot create a local role in CDB$ROOT even if you try: Oracle prevents you from doing so, with exactly the same ORA-65049 error message as if you tried to create a local user. Just like common user names, common role names must begin with C##.

Many Oracle-supplied roles are common roles, which makes it easy to create common users with a common set of privileges. It also allows you to manage roles centrally in one place. When granting a role to a role or user, it is important to pay attention to the container you are granting the role in. For example, you can grant a common role to a user or role in a PDB, and this grant is valid in that container only. Consider this example: first, a common user “C##GUSER” is created in the root container, without any privileges.

SYS@CDB$ROOT> create user c##guser identified by secret;
 
User created.

Next, the SYS user that was used to create C##GUSER in the first place switches its session context to a different container, a PDB named PDB1, and grants the (common) connect role to the new account:

SYS@CDB$ROOT> alter session set container=pdb1;
 
Session altered.
 
SYS@CDB$ROOT> @$HOME/login
SYS@PDB1> grant connect to c##guser;
 
Grant succeeded.

Another grant is executed in PDB “PDB2”; this time the DBA role is granted to the C##GUSER account. Note that even though C##GUSER is a common user, it currently cannot connect to any PDB other than PDB1 and PDB2; neither can it connect to CDB$ROOT. Connecting as C##GUSER to PDB1, you can see that the user really only has the connect role granted, by checking the session_privs view:

C##GUSER@PDB1> select count(1) from session_privs;
 
  COUNT(1)
----------
         1

Likewise, the privileges granted in PDB2 match the DBA role, and the view session_privs returns 221 rows there.

Local roles

Local roles are best compared to roles in Oracle before 12.1; in fact, they behave exactly as before, mainly thanks to the promise of backward compatibility Oracle made. Local roles have a defined namespace within the PDB they are created in. Similar to local users, multiple roles with the same name can exist in different PDBs, as the sketch below illustrates.
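
A minimal sketch illustrating the point, assuming two PDBs and a session with sufficient privileges in each; the role name is arbitrary:

SYSTEM@PDB1> create role approle container = current;
 
Role created.
 
SYSTEM@PDB2> create role approle container = current;
 
Role created.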

Playing nicely with others

One of the major concerns of staff responsible for managing production databases is rogue or “runaway” queries or databases, which can have a severely negative impact on the other users of the same hardware. Oracle has been addressing the “runaway” session and database over many database releases now, and 12.1 is no exception to the rule.

There are a number of weapons in the DBA’s arsenal to prevent resource starvation. Some are based on the operating system, others on the Oracle kernel. Some operating systems, as well as some virtualization solutions, allow you to create lightweight operating system images onto which a database home is installed. Most often you would set resource limits on the virtualized server in the form of memory and CPU. Ideally, this technology integrates with the base operating system to allow for better NUMA-awareness. What Oracle kept telling us users over time is that operating system scheduling has the downside of being difficult to quantify, for example by looking at Automatic Workload Repository (AWR) reports: is the high CPU usage down to the Oracle instance being extremely busy, or down to a resource-capping operating system? As Jonathan Lewis famously said about “know your data,” the same principle applies in this situation: know your environment.

The use of Oracle-provided technology allows us to benefit from the instrumentation of the kernel in the form of the wait interface. Before Oracle 12c it was already possible to use the Database Resource Manager (DBRM) to manage resource consumption within the same database. The Exadata-only I/O Resource Manager furthermore allows the administrator to regulate I/O bandwidth between databases, and Instance Caging allows you to limit the CPU usage of a specific database instance. Let’s have a look at these features in the next sections.

image Note  I/O Resource Manager will not be covered in this chapter, for more information about the IORM please refer to Expert Oracle Exadata, also from Apress.

Overview of Resource Manager in Oracle 12.1

Before launching into the description of the changes to Resource Manager in 12c, a little bit of background knowledge is necessary. On many site visits I noticed that Resource Manager was not implemented beyond the standard deployment by Oracle. This does not do the technology justice, but on the other hand the problem is exactly what I saw on site: the business users have to analyze their application and define the use of Resource Manager in their database, and on many occasions “the business” feels uncomfortable making this decision. Hopefully the following paragraphs allow them to make a more informed decision, based on the facts.

What exactly is the Database Resource Manager (DBRM) then? In its simplest form a definition could be as follows: the Database Resource Manager allows the administrator to use a variety of criteria to logically group users or workloads into resource consumer groups. A resource consumer group defines how many resources in a database can be assigned to users, in the form of a resource plan directive. Since Oracle 10g, users can be moved into a lower resource consumer group when they cross the threshold of their allowed resource usage. Beginning with Oracle 11g, they can also be moved back to the initial consumer group once their downgraded calls are completed.

This is especially useful in conjunction with connection pooling and web applications, where a session can no longer be directly associated with an individual as it was in the days of dedicated server connections and Oracle Forms applications. In the connection pooling scenario, the application simply grabs a connection out of the pool of available connections, performs its assigned task, and then returns the connection to the pool. Often these operations are very short in nature. Connection pooling offers a huge advantage over the traditional way of creating a dedicated connection each time a user performs an operation against the database, greatly reducing the overhead associated with establishing a dedicated server process.

Before the Resource Manager can become active, it is important to define a mapping between a database user and the corresponding resource consumer group. Once that is established (most often based on the service name the user uses to connect), the resource plan can be created and activated. What you just read was the state of play before Oracle 12c, and the same principles apply for a 12c non-CDB as well.

Thankfully, Oracle extended DBRM to take Pluggable Databases into account. The new DBRM in Oracle 12.1 operates on two different levels in a Container Database:

  • On the CDB level
  • On an individual user-PDB level

This allows the administrator to define the potential share of workload available to each PDB first, before breaking the CPU quantum down within the individual PDB, where the intra-PDB Resource Manager will work its magic.

Resource Manager for the Container Database

The new extension to the Database Resource Manager in Oracle 12.1 allows the administrator to rank the importance of PDBs within a CDB. The new DBRM makes allowance for the fact that some PDBs are more important than others, or can simply ensure that no individual PDB starves the others for performance. DBRM is essential when it comes to enforcing service levels and predictable performance in a consolidated environment: depending on the service purchased by the database user, you could assign it to a Gold/Silver/Bronze class. Resource Manager is slightly different in a CDB compared with a non-CDB. DBRM decisions are made on two levels, first on the CDB level, then on the individual PDB level. You will see a little later in the chapter that the syntax reflects this.

Instead of a resource plan you create a CDB plan in DBRM for the multi-tenant database. The CDB plan directives of concern for managing resource consumption between PDBs are:

  • Shares
  • Utilization limits
  • Parallel Execution (PX)

Shares are counters: the more you allocate, the more resources your PDB gets. For example, if one PDB holds four shares and two other PDBs hold two shares each, the first PDB is guaranteed 4/8, or 50 percent, of the CPU under contention. Utilization limits are percentages and define what percentage of CPU your PDB is allowed to consume. Utilization limits are not cumulative for the CDB; each PDB can use up to 100%. Shares and utilization limits currently extend to CPU and parallel query.

To demonstrate the new resource manager features, three very small PDBs will be created for a quick Swingbench run, as shown here:

SYS@CDB$ROOT> create pluggable database swingbench1
  2  admin user pdbadmin identified by secret
  3  roles=(dba)
  4  default tablespace soe datafile size 100m
  5   autoextend on next 100m maxsize 10g;
 
Pluggable Database created.
 
SYS@CDB$ROOT> alter pluggable database swingbench1 open;
 
Pluggable Database altered.

In the next step the Order Entry schema creation wizard is used to populate the SOE schema in the newly created pluggable database. Once the test data has been created, the PDB is cloned to swingbench2 and swingbench3, using the same technique as demonstrated earlier in this chapter.

SQL> create pluggable database swingbench2
  2  from swingbench1;
 
Pluggable database created.

image Note  In the absence of specific directives, the following defaults will be applied: each PDB will get exactly one share, and can use 100 percent of CPU and parallel query. That is, unless you decide to change the defaults using the update_cdb_default_directive() procedure in the dbms_resource_manager package.

Creating the CDB plan

With three PDBs in place serving the same purpose, the administrator needs to choose which one to prefer. Let’s assume for this scenario that database swingbench1 is twice as important as swingbench2 and swingbench3. The shares will be distributed according to the PDBs’ importance, and parallel query will be forbidden (swingbench is an OLTP-style workload). There is no limit on an individual PDB’s CPU utilization. To implement this resource plan, the following code has been executed in the root:

begin
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan => 'SWINGBENCH_PLAN');
 
  -- set the shares for PDB swingbench1. Prevent
  -- parallel query
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'SWINGBENCH_PLAN',
    PLUGGABLE_DATABASE => 'swingbench1',
    SHARES => 4,
    UTILIZATION_LIMIT => null,
    parallel_server_limit => 0);
 
  -- repeat for swingbench2, reduce the number of
  -- shares.
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'SWINGBENCH_PLAN',
    PLUGGABLE_DATABASE => 'swingbench2',
    SHARES => 2,
    UTILIZATION_LIMIT => null,
    parallel_server_limit => 0);
 
  -- and for swingbench3
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'SWINGBENCH_PLAN',
    PLUGGABLE_DATABASE => 'swingbench3',
    SHARES => 2,
    UTILIZATION_LIMIT => null,
    PARALLEL_SERVER_LIMIT => 0);
end;
/
 
PL/SQL procedure successfully completed.

Validating and enabling the CDB Plan

With the pending area now created and our directives inside, you need to validate the pending area before you can submit it. This is often performed in two steps:

SYS@CDB$ROOT> begin
  2  dbms_resource_manager.validate_pending_area();
  3  end;
  4  /
 
PL/SQL procedure successfully completed.
 
SYS@CDB$ROOT> begin
  2  dbms_resource_manager.submit_pending_area;
  3  end;
  4  /
 
PL/SQL procedure successfully completed.

That’s one CDB resource plan created. A resource plan is enabled by modifying the initialization parameter resource_manager_plan in the root, as shown here:

SYS@CDB$ROOT> select value from v$parameter
  2  where name = 'resource_manager_plan';
 
VALUE
------------------------------------------------------------------------
 
SYS@CDB$ROOT> alter system set resource_manager_plan = 'swingbench_plan';
 
System altered.
 
SYS@CDB$ROOT> select value from v$parameter
  2  where name = 'resource_manager_plan';
 
VALUE
------------------------------------------------------------------------
swingbench_plan
 
SYS@CDB$ROOT>

If it should turn out that the effect on your system is catastrophic, you can quickly revert to the previous state by setting resource_manager_plan back to its old value with a corresponding alter system command.

Be careful with the automatic maintenance windows and scheduler jobs though: Oracle has resource plans associated with maintenance windows that can enable a different resource plan. The default windows’ definitions can be obtained from the dba_scheduler_windows dictionary view, as shown below.
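
A minimal sketch of such a query; the exact output depends on your configuration:

SYS@CDB$ROOT> select window_name, resource_plan
  2  from dba_scheduler_windows;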

If the resource plan has changed because of a maintenance window you can see output similar to this in the v$parameter view:

SYS@CDB$ROOT> select value from v$parameter
  2  where name = 'resource_manager_plan';
 
VALUE
--------------------------------------------------
SCHEDULER[0x4211]:DEFAULT_MAINTENANCE_PLAN

You could do the same and create your own windows and associate a resource manager plan with them, or use the “force” keyword to prevent your resource plan from being replaced, as shown below.
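
A quick sketch of pinning the plan with the force prefix, reusing the plan created earlier:

SYS@CDB$ROOT> alter system set resource_manager_plan = 'force:swingbench_plan';
 
System altered.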

Viewing information about the CDB plans

A new class of views introduced in Oracle 12.1 allows you to view information about the CDB plans. The views all fall into the DBA_CDB% class of views. At the time of this writing the following views existed:

  • DBA_CDB_RSRC_PLANS
  • DBA_CDB_RSRC_PLAN_DIRECTIVES

You can query both to get more information about the plans defined in the CDB. After the creation of the swingbench plan, I had the following contents in the DBA_CDB_RSRC_PLANS view:

SYS@CDB$ROOT> select plan,comments,status
  2  from DBA_CDB_RSRC_PLANS;
 
PLAN                           COMMENTS                       STATUS
------------------------------ ------------------------------ ----------
DEFAULT_CDB_PLAN               Default CDB plan
DEFAULT_MAINTENANCE_PLAN       Default CDB maintenance plan
SWINGBENCH_PLAN
ORA$QOS_CDB_PLAN               QOS CDB plan
ORA$INTERNAL_CDB_PLAN          Internal CDB plan

The plan directives used previously are exported in the DBA_CDB_RSRC_PLAN_DIRECTIVES view:

SYS@CDB$ROOT> select pluggable_database, shares, utilization_limit,
  2   parallel_server_limit
  3   from DBA_CDB_RSRC_PLAN_DIRECTIVES
  4  where plan = 'SWINGBENCH_PLAN'
  5  order by PLUGGABLE_DATABASE;
 
PLUGGABLE_DATABASE            SHARES UTILIZATION_LIMIT PARALLEL_SERVER_LIMIT
------------------------- ---------- ----------------- ---------------------
ORA$AUTOTASK                                        90                   100
ORA$DEFAULT_PDB_DIRECTIVE          1               100                   100
SWINGBENCH1                        4                                       0
SWINGBENCH2                        2                                       0
SWINGBENCH3                        2                                       0

You will undoubtedly have noticed the ORA$AUTOTASK and the default PDB directive in addition to the explicitly created directives. These two are not covered in detail in this chapter. The ORA$DEFAULT_PDB_DIRECTIVE defines the default weight granted to newly created PDBs; you change it using the update_cdb_default_directive() procedure in DBMS_RESOURCE_MANAGER. As you already know, Oracle introduced certain automatic maintenance tasks with 11g Release 1. Since these are also under the control of the Resource Manager they adhere to the same limits; if you’d like to change the priority of the automatic maintenance tasks, you can modify the ORA$AUTOTASK directive.

Resource Manager for the Pluggable Database

The DBRM within the PDB works similarly to how it used to work before Oracle 12.1. Again, it allows the administrator to apportion computing resources to resource consumer groups, mapping users to consumer groups and plan directives, all wrapped up in resource plans. You can easily expand on the above example, which instructed the Oracle kernel to arrange the available resources between PDBs. Once the individual PDB has received its resource quantum, it is up to DBRM to decide which users should have priority treatment.

There are some restrictions in the current release of the software that you need to be aware of. The first restriction is related to sub-plans: the group_or_subplan argument to the create_plan_directive() procedure in DBMS_RESOURCE_MANAGER allows you to specify a subplan instead of a resource consumer group. The result is a nested resource plan, which can be difficult to understand or debug. In the current release you cannot assign subplans within a PDB, and the number of consumer groups is restricted to a maximum of eight.

To enable a resource plan for a PDB, there has to be a resource plan in use in the CDB. This is a similar requirement to Instance Caging (see next section), which also requires a Database Resource Plan to be present.

Creating resource plans for the PDB

Resource plans for PDBs are created while connected to the PDB as an administrative user. Depending on how you created your user-PDB, you might have granted the admin user the necessary privileges already; alternatively, connect to the PDB directly as a common user. The following example makes use of the SYSTEM account to create a simple resource plan.

The Database Resource Manager is a lot more powerful and offers far more options than can reasonably be presented in this chapter; if you want to learn more about it, I recommend the relevant sections in the Administrator’s Guide. This example focuses on running swingbench in the three PDBs created earlier. The goal of this benchmark is to allow the SOE account, which by default owns the order entry schema, to execute its tasks without being preempted. To that end, a new resource consumer group “SWINGBENCH_GROUP” is created (note the connection to the PDB!):

SYSTEM@SWINGBENCH1> begin
  2    dbms_resource_manager.clear_pending_area;
  3    dbms_resource_manager.create_pending_area;
  4    dbms_resource_manager.create_consumer_group(
  5      consumer_group => 'SWINGBENCH_GROUP',
  6      comment => 'used for the execution of order entry benchmark');
  7    dbms_resource_manager.validate_pending_area;
  8    dbms_resource_manager.submit_pending_area;
  9  end;
 10  /
 
PL/SQL procedure successfully completed.

The mandatory SYS_GROUP and OTHER_GROUPS will be part of the resource plan too. Let’s create the plan, using the RATIO CPU management method to keep it simple:

SYSTEM@SWINGBENCH1> begin
  2    -- clear the pending area and create a new one
  3    DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
  4    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  5
  6    -- create the plan-specific to PDB swingbench1. This
  7    -- is a simple example, using RATIOs internally.
  8    -- The goal is to have the following ratios:
  9    -- 1:2:3:5 for low group, other groups, swingbench group
 10    -- and sys group.
 11    DBMS_RESOURCE_MANAGER.CREATE_PLAN(
 12      plan => 'SWINGBENCH_PLAN_SWINGBENCH1',
 13      MGMT_MTH => 'RATIO',
 14      comment => 'running swingbench on first PDB');
 15
 16    -- create a plan directive
 17    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
 18      plan=>'SWINGBENCH_PLAN_SWINGBENCH1',
 19      mgmt_p1 => 1,
 20      GROUP_OR_SUBPLAN => 'LOW_GROUP',
 21      comment => 'low group');
 22
 23    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (
 24      plan=>'SWINGBENCH_PLAN_SWINGBENCH1',
 25      MGMT_P1 => 2,
 26      GROUP_OR_SUBPLAN => 'OTHER_GROUPS',
 27      comment => 'others group');
 28
 29    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
 30      plan=>'SWINGBENCH_PLAN_SWINGBENCH1',
 31      MGMT_P1 => 3,
 32      GROUP_OR_SUBPLAN => 'SWINGBENCH_GROUP',
 33      comment => 'swingbench group');
 34
 35    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
 36      plan=>'SWINGBENCH_PLAN_SWINGBENCH1',
 37      MGMT_P1 => 5,
 38      GROUP_OR_SUBPLAN => 'SYS_GROUP',
 38      comment => 'sys group');
 40
 41    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
 42      attribute => DBMS_RESOURCE_MANAGER.ORACLE_USER,
 43      value => 'SOE',
 44      CONSUMER_GROUP => 'SWINGBENCH_GROUP');
 45  end;
 46  /
 
PL/SQL procedure successfully completed.
 
SYSTEM@SWINGBENCH1>

The code clears and creates a pending area, then starts off by creating the new resource plan. The subsequent calls attach plan directives to the plan; their mgmt_p1 values define the CPU ratios between the consumer groups. The final call maps the SOE user to the new SWINGBENCH_GROUP.

Validating and enabling the PDB resource plan

With the pending area firmly established, you need to validate it for logical errors in the plan description. The procedure is the same as for the CDB plan, namely a call to validate_pending_area():

SYSTEM@SWINGBENCH1> begin
  2  dbms_resource_manager.validate_pending_area;
  3  end;
  4  /
 
PL/SQL procedure successfully completed.

If that call doesn’t return any errors, you can submit the pending area:

SYSTEM@SWINGBENCH1> begin
  2  dbms_resource_manager.submit_pending_area;
  3  end;
  4  /
 
PL/SQL procedure successfully completed.

Great! With the resource plan in place, you can enable it as well.

SYSTEM@SWINGBENCH1> alter system set
  2  resource_manager_plan = 'swingbench_plan_swingbench1';
 
System altered.
 
SYSTEM@SWINGBENCH1> show parameter resource_manager_plan
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan                string      SWINGBENCH_PLAN_SWINGBENCH1

You can view the active resource manager plans and their settings using the V$RSRC_PLAN dynamic performance view.

SYS@CDB$ROOT> SELECT PDB.name PDB_NAME,
  2    PLN.name PLAN_NAME,
  3    PLN.IS_TOP_PLAN,
  4    PLN.CON_ID,
  5    PLN.PARALLEL_EXECUTION_MANAGED
  6  FROM V$RSRC_PLAN PLN,
  7    V$PDBS PDB
  8  WHERE pln.con_id = pdb.con_id
  9  /
 
PDB_NAME        PLAN_NAME                        IS_TO     CON_ID PARALLEL
--------------- -------------------------------- ----- ---------- --------
PDB$SEED        INTERNAL_PLAN                    TRUE           2 FULL
SWINGBENCH1     SWINGBENCH_PLAN_SWINGBENCH1      TRUE           5 FULL
SWINGBENCH2     SWINGBENCH_PLAN_SWINGBENCH2      TRUE           6 FULL
SWINGBENCH3     SWINGBENCH_PLAN_SWINGBENCH3      TRUE           7 FULL

In addition to the dynamic V$-views you will be notified in the alert.log if the plan changes:

Sat Sep 28 20:49:56 2013
Setting Resource Manager plan swingbench_plan_swingbench1 at pdb SWINGBENCH1
(5) via parameter
ALTER SYSTEM SET resource_manager_plan='swingbench_plan_swingbench1' SCOPE=BOTH;
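The V$RSRC_PLAN output above also showed plans for SWINGBENCH2 and SWINGBENCH3; these were created and enabled in exactly the same way. As a sketch, assuming an analogous plan exists in the second PDB:

SYSTEM@SWINGBENCH2> alter system set
  2  resource_manager_plan = 'swingbench_plan_swingbench2';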

With a resource manager plan implemented in each user-PDB you can cap resource usage per PDB and avoid conflicts with neighboring databases.

One final step remains: when the user SOE connects to the PDB, it must be assigned to the appropriate consumer group. There are a number of ways to achieve this goal; the easiest is to map the user to the resource consumer group. The mapping was already part of the plan-creation script earlier, but it can equally be performed as a standalone operation, as shown here:

SYSTEM@SWINGBENCH1> begin
  2  dbms_resource_manager.create_pending_area;
  3  dbms_resource_manager.set_consumer_group_mapping(
  4    attribute => DBMS_RESOURCE_MANAGER.ORACLE_USER,
  5    value => 'SOE',
  6    consumer_group => 'SWINGBENCH_GROUP');
  7  dbms_resource_manager.validate_pending_area;
  8  dbms_resource_manager.submit_pending_area;
  9  end;
 10  /
 
PL/SQL procedure successfully completed.

The success of this operation can be verified in the v$session.resource_consumer_group column and in dba_users.initial_rsrc_consumer_group. If for some reason you don’t see the correct consumer group in either of these, you may need to grant the SOE user the privilege to switch consumer groups, as shown in this example:

begin
  DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
 
  -- grant the SOE user the privs to switch his consumer group
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
    GRANTEE_NAME => 'SOE',
    CONSUMER_GROUP => 'SWINGBENCH_GROUP',
    GRANT_OPTION => FALSE);
 
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
 
PL/SQL procedure successfully completed.
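To double-check the mapping, a query along these lines, run while SOE sessions are connected, should report SWINGBENCH_GROUP:

SYSTEM@SWINGBENCH1> select username, resource_consumer_group
  2  from v$session
  3  where username = 'SOE';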

Testing the Resource Plan

With all the hard work done it is about time to test whether the resource manager actually provides a benefit. Unfortunately no suitable load generator was available to fully exploit the power of the lab server, so a slightly different method had to be chosen to saturate the CPUs. The lab server has 24 cores and 32 GB RAM available to itself; the generated load therefore has to be substantial. A little script, to be executed on the database server itself, helps achieve this goal:

$ cat burn.sh
#!/bin/bash
 
# check if all parameters have been passed
if [[ -z "$1" || -z "$2" ]] ; then
        echo "usage burn.sh <pdb> <num_sessions>"
        exit 99
else
        PDB=$1
        SESSIONS=$2
fi
 
# get the start time before launching the sessions
STARTTIME=$SECONDS

# ramp up some workload
for i in $(seq 1 $SESSIONS) ; do
        ( sqlplus soe/soe@localhost/${PDB} @burn > /dev/null 2>&1 ) &
done
 
# wait for the background processes to finish
wait
 
# calculate how long it took
((DURATION=$SECONDS-$STARTTIME))
 
# and tell us!
echo "elapsed time in seconds on ${PDB}: ${DURATION}"

All it really does is launch an onslaught of calculations of the square root of an integer; the burn script contains little more than this PL/SQL block:

declare
        v number;
begin
        for i in 1..1000000000 loop
                v := sqrt(i);
        end loop;
end;
/
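The burn script can then be pointed at all three PDBs in parallel. The session counts below are hypothetical; they merely need to be high enough to saturate the 24 cores:

$ ./burn.sh SWINGBENCH1 8 &
$ ./burn.sh SWINGBENCH2 8 &
$ ./burn.sh SWINGBENCH3 8 &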

Remember that our resource plan was configured for CPU shares! The I/O portion can safely be ignored. You can see the system was suitably under pressure as this output from “top” shows:

top - 16:27:31 up 26 min,  7 users,  load average: 26.54, 11.49, 5.69
Tasks: 452 total,  25 running, 427 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.7%us,  0.0%sy,  0.0%ni,  4.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 94.7%us,  0.0%sy,  0.0%ni,  5.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 93.7%us,  0.0%sy,  0.0%ni,  6.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
[...]
Cpu22 : 95.4%us,  0.0%sy,  0.0%ni,  4.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu23 : 87.4%us,  0.0%sy,  0.0%ni, 12.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  32994920k total, 26970360k used,  6024560k free,    24028k buffers
Swap:   524284k total,        0k used,   524284k free,   185932k cached

Now to answer the question about the resource manager: yes, it works. Running the burn script against all three PDBs in parallel resulted in these execution times:

  • Elapsed time in seconds on swingbench1: 188
  • Elapsed time in seconds on swingbench2: 265
  • Elapsed time in seconds on swingbench3: 266

The numbers reflect the shares defined in the CDB plan: SWINGBENCH1, holding four shares, finished well ahead of SWINGBENCH2 and SWINGBENCH3, which hold two shares each.

Instance Caging

Instance caging was introduced with Oracle 11g Release 2. It addresses a scenario where multiple databases run on the same physical hardware and might starve each other of CPU resources. As you have seen, the Database Resource Manager is great at managing resources within the same database (CDB or non-CDB), but it does not allow you to manage resources between databases. Instance Caging to the rescue: in a nutshell, instance caging allows administrators to limit the number of CPUs available to a database instance by setting the initialization parameter cpu_count. In addition, a resource manager plan needs to be active for the feature to work. As you saw in the previous section, using a resource plan is very useful anyway, and not at all a high price to pay for the instance caging feature.

To enable instance caging, first ensure there is an active resource plan. You could for example query the V$RSRC_PLAN view to find out which resource plan is in use, or alternatively check the V$PARAMETER view. Once you have made sure that a resource manager plan is in place, all you need to do is set the cpu_count parameter. This parameter is dynamic and can easily be changed while the database is up and running. Since cpu_count is the basis for many other derived parameters, you should make adjustments carefully; it is best not to change the parameter too drastically. You can either over- or undersubscribe your database server. Oversubscribing in this context means that the sum of all database instances' cpu_count values exceeds the number of CPUs available to the operating system. Alternatively you can partition the available CPUs among the individual databases so that the sum matches the server's CPU count exactly.
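As a minimal sketch, assume the 24-core lab server hosts a second instance next to our CDB and the CPUs are to be partitioned 16/8; the split is purely illustrative:

SYS@CDB$ROOT> alter system set resource_manager_plan = 'SWINGBENCH_PLAN' scope=both;
SYS@CDB$ROOT> alter system set cpu_count = 16 scope=both;

The neighboring instance would be capped at cpu_count = 8 in the same fashion, yielding an exact partitioning of the server.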

Summary

Pluggable Databases are certainly the most exciting feature of Oracle 12.1. Designers of consolidation platforms have ample use cases to directly embrace the new technology. The ability to create a new namespace in the form of the Pluggable Database allows you to maintain multiple users with the same name in the same Container Database. The same applies for roles of course, and the user concept allows you to have common users with fine-grained access privileges, without having to be granted a “create session” privilege. This should allow for a smooth migration of most enterprise monitoring software.

The many ways to create Pluggable Databases give architects and designers a lot of flexibility in how they provision the golden copy of a Pluggable Database. The golden copy can, for example, be exported via NFS, ready for cloning; the source file name conversion clause makes this possible.
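As a sketch of that technique, assume a golden PDB was unplugged and its manifest and data files now live on the NFS mount /nfs/golden, while the manifest still records the original location /u01/oradata/GOLD; the paths and the PDB name are hypothetical:

SYS@CDB$ROOT> create pluggable database newpdb
  2  using '/nfs/golden/goldenpdb.xml'
  3  source_file_name_convert = ('/u01/oradata/GOLD', '/nfs/golden')
  4  copy;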

Another advantage that should not go unmentioned (again) is the fact that the unplug/plug operation allows DBAs to upgrade databases very quickly. And even if the databases stay in place, you patch one database instead of potentially hundreds. Now that the foundation for Pluggable Databases has been laid there is more to come. The following chapters deal with protecting your database from disaster using Data Guard. But the local database has to be protected as well: even though not a glamorous task, database backups have to be performed regularly. Not only that, you should also restore a random backup from time to time to ensure the backups are not corrupted and can be restored in the worst case. Finally, there will be a chapter dealing with the migration to Oracle 12.1.
