CHAPTER 2


Oracle 12c New Features

In this chapter you can learn more about some of the new features of Oracle 12c in general, but with special emphasis on database consolidation, the main theme of this book. Some new features will not be covered in this chapter but in other chapters instead, to give them the space they need. This is especially true for one of the most interesting new features introduced in 12c: Pluggable Databases. Other interesting new features can be found in Chapters 9 and 10 on disaster recovery, and in Chapter 11 on backup and recovery. Changes in the migration process can be found in Chapter 12. To make it easier to use this chapter as a reference, new features are grouped into their respective scope:

  • Interesting changes for developers
  • Changes relevant to the database availability
  • Enhanced support for Information Life Cycle Management
  • Other infrastructure changes

The chapter will begin with the noteworthy changes for Oracle database developers.

Changes for Developers

As with every release there are interesting new features for developers. There are too many to cover in this space, so I will mention only the most interesting ones. Most of them relate directly to consolidating Oracle applications; a few others were simply too interesting not to mention. If you want to see all of the new features in all their glory you need to refer to the official Oracle New Features Guide, which lists them all briefly and refers to the relevant guides.

Security is rapidly becoming a key feature in the database and securing the environment is very often left to the database administrator. Security however begins with the application, and the developers should be the first ones to implement the provided interfaces in their code. Security is therefore listed here as a developer feature, not as an administrator feature.

The Quest for Security and Least Privilege

Unfortunately security is an all-too-often neglected topic, not only in consolidation but elsewhere as well. Several exploits of security vulnerabilities have reached the evening news, and it appears as if the bad guys are always a step ahead. Knowing about the dangers of the Internet age is one thing; securing the database environment is another. Using the least-privilege approach, as opposed to granting a schema owner the DBA role, is what should be done moving forward. Many applications though started their life on Oracle 7 or 8, in what the author likes to call the age of innocence. One could use FTP and the telnet protocol on a day-to-day basis, and clear-text authentication was the standard. In the same spirit, the application schema owner was simply granted the DBA role instead of only the required privileges. Changing such an application will be difficult but is well worth the time.

Although some problems are addressed with the introduction of the self-contained Pluggable Databases, security was and is very relevant. Imagine what would happen if someone broke out of this environment and had access to a much larger number of databases and their data! Thankfully Oracle has done a lot to limit damage that can be done. A hardened audit administration, the ability to restrict privilege inheritance for procedures with invoker rights, and many more are aimed at reducing the possibility of an exploit.

Roles and PL/SQL Units

The security model in Oracle 12c knows two different sets of privileges: a procedure is invoked either with all the rights and privileges of the definer, or with the rights of the invoker of the procedure. The documentation refers to this model as “definer” vs. “invoker” rights. There are two significant changes to the invoker rights model: the first one is explained in this section and deals with the ability to grant roles to a PL/SQL unit. The second one will be detailed in the next section and allows you to restrict the rights of the invoker.

The Oracle SQL language now allows you to grant roles to PL/SQL units, that is, to a stored procedure, function, or package. Note the absence of a trigger here! This allows the developer to create an invoker rights procedure in the schema of the application owner, which should be well protected by means of a secure password and a strict non-interactive use policy. By granting the privileges required for execution to a role, and the role to the stored PL/SQL unit, it is possible to create low-privileged accounts that access the application. More importantly, there are no direct grants to the underlying tables within the application owner’s schema.

Figure 2-1 demonstrates how the combined rights of the invoker and the role allow for the execution of the PL/SQL unit.


Figure 2-1. Access privileges granted to a user and a role allow for the execution of an invoker rights procedure

Consider the following simplified example for a demonstration of the concept.

  • An application owner, named APPOWNER owns the tables for the application.
  • A user account, named LOWPRIVS accesses the application via a well-defined API.

The LOWPRIVS account has merely been granted the right to connect to the database. The API is owned by the APPOWNER schema. The following procedure is part of the API and allows an application user to enter data into the system. Crucially, the LOWPRIVS account does not have any grants to objects in the application owner’s schema.

SQL> CREATE OR REPLACE PROCEDURE appowner.insert_critical_data(
  2    pi_id appowner.critical_table.id%type,
  3    pi_secret appowner.critical_table.secretData%type)
  4  authid current_user
  5  as
  6  begin
  7    insert into appowner.critical_table (
  8       id, secretData)
  9    values (
 10      pi_id,
 11      pi_secret);
 12* end;
SQL>

Notice that the procedure is an invoker rights procedure. The next step is to create a role that allows the execution of this stored procedure. In order to insert into the table, the select and insert privileges are needed.

SQL> create role critical_table_role;
 
Role created.
 
SQL> grant insert on appowner.critical_table to critical_table_role;
 
Grant succeeded.
 
SQL> grant select on appowner.critical_table to critical_table_role;
 
Grant succeeded.

With the grants sorted the role can be granted to the procedure as shown:

SQL> grant critical_table_role to procedure insert_critical_data;
 
Grant succeeded.
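The execute privilege can likewise be bundled into a role; the role name in the following sketch is an assumption for illustration:

SQL> create role app_user_role;
 
Role created.
 
SQL> grant execute on appowner.insert_critical_data to app_user_role;
 
Grant succeeded.
 
SQL> grant app_user_role to lowprivs;
 
Grant succeeded.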

With the execute privilege on the stored procedure granted to the LOWPRIVS user either directly or better via another role, the low privileged user can make use of the API:

SQL> exec appowner.insert_critical_data(1, 'test')
 
PL/SQL procedure successfully completed.
 
SQL> commit;
 
Commit complete.
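Revoking the role from the procedure mirrors the grant:

SQL> revoke critical_table_role from procedure insert_critical_data;
 
Revoke succeeded.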

As soon as the role grant is revoked from the stored procedure, its invocation fails:

SQL> exec appowner.insert_critical_data(2, 'test 2')
BEGIN appowner.insert_critical_data(2, 'test 2'); END;
 
*
ERROR at line 1:
ORA-00942: table or view does not exist
ORA-06512: at "APPOWNER.INSERT_CRITICAL_DATA", line 7
ORA-06512: at line 1

The new way of granting roles to PL/SQL units allows for much finer granularity in the implementation of the API. In 11g and before it was necessary either to grant privileges on the application schema directly to users to allow them to execute code, or to use definer rights procedures. Security auditors might find that definer rights procedures are dangerous since many application schemas have more privileges granted to them than needed. Using roles allows the developer to grant only the privileges needed to stored code and nothing more.

Inheritance in Invoker’s Rights Procedures

The second change to the invoker rights security model when calling PL/SQL units was already alluded to in the previous section. The change allows you to prevent exploits in situations where a user with high privileges executes a procedure with invoker rights. The invoker’s set of privileges can potentially be higher than what is needed.

Under normal circumstances executing stored code using the invoker’s privileges does not need to be a problem: there is trust in the developers not to add malicious code into the system. Stricter security rules however require more control. Imagine for a second that a developer replaces the stored procedure harmless() with something malicious, and thereby hijacks the higher-privileged account into performing an undesired operation. Consider this scenario:

  • According to the security guidelines, the application programming interface is grouped into a schema with low privileges. In this scenario the schema is named LOWPRIVS, and it contains—among other code—a procedure named harmless to perform a harmless operation.
  • All application data is defined and stored in its own schema, named APPOWNER in the example.
  • Sometimes an account with higher privileges than those granted to LOWPRIVS needs to execute the procedure harmless() as part of a different workflow. This account, HIGHPRIVS, is granted the DBA role as an extreme example.

The procedure harmless() is used to insert data into a table. Following is the definition:

CREATE OR REPLACE PROCEDURE lowprivs.harmless(
  pi_id appowner.critical_table.id%type,
  pi_secret appowner.critical_table.secretData%type)
authid current_user
as
begin
  insert into appowner.critical_table (
    id, secretData)
  values (
    pi_id,
    pi_secret);
 
end;
/
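For completeness, the direct grant assumed in this scenario could have been issued as follows (a sketch):

SQL> grant select, insert on appowner.critical_table to lowprivs;
 
Grant succeeded.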

The LOWPRIVS account has a direct grant to select and insert into the table APPOWNER.CRITICAL_TABLE. If LOWPRIVS executes the procedure, everything is fine. However, if someone changes the code to perform some undesired operation, things can go wrong. For example:

SQL> CREATE OR REPLACE PROCEDURE lowprivs.harmless(
  2    pi_id appowner.critical_table.id%type,
  3    pi_secret appowner.critical_table.secretData%type)
  4  authid current_user
  5  as
  6  begin
  7    execute immediate 'truncate table appowner.critical_table';
  8  end;
  9  /
Procedure created.

Thanks to the execute immediate statement, privilege checks are deferred to run time and the procedure compiles normally. If LOWPRIVS executes the procedure nothing happens; the malicious developer has to wait for someone with higher privileges to execute the code.

LOWPRIVS> exec lowprivs.harmless(3, 'evil lowprivs')
BEGIN lowprivs.harmless(3, 'evil lowprivs'); END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "LOWPRIVS.HARMLESS", line 7
ORA-06512: at line 1

Disaster will strike when the higher privileged user—HIGHPRIVS—executes the procedure!

HIGHPRIVS> exec lowprivs.harmless (4, 'test highprivs #2')
 
PL/SQL procedure successfully completed.
 
APPOWNER> select count(1) from appowner.critical_table;
 
  COUNT(1)
----------
         0
 
APPOWNER>

This is definitely not what you want. To prevent such code from performing undesired operations by exploiting the higher privileges of the invoking user, Oracle designed an additional security layer around invoker rights: inheritance of privileges. For an invoker with higher privileges to pass those on to the called procedure, the invoker must have granted the “inherit privileges” system privilege to the owner of the stored code. Alternatively, the “inherit any privileges” system privilege must be granted to the owner of the invoker rights code. If the owner of the stored code has neither of these two, the runtime engine will raise an ORA-06598 "insufficient INHERIT PRIVILEGES privilege" error.
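As a sketch, an explicit grant allowing code owned by LOWPRIVS to inherit the privileges of HIGHPRIVS would be issued by HIGHPRIVS (or an administrator) like this:

SQL> grant inherit privileges on user highprivs to lowprivs;
 
Grant succeeded.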

It appears that “inherit privileges” is granted to public in order not to break any existing applications. Post-migration you might want to review these grants and perhaps remove the grant from public:

SYS AS SYSDBA> revoke inherit privileges on user highprivs from public;
 
Revoke succeeded.

Subsequent calls to the procedure will cause the error mentioned above:

HIGHPRIVS> exec lowprivs.harmless(2, user)
BEGIN lowprivs.harmless(2, user); END;
 
*
ERROR at line 1:
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "LOWPRIVS.HARMLESS", line 1
ORA-06512: at line 1

White Lists to Restrict PL/SQL Program Unit References

Oracle 12c introduces a finer level of granularity for calls to PL/SQL stored units, named white lists. This allows the developer to create stored code which can only be invoked by other PL/SQL units which are on the calling unit’s white list. Imagine a procedure that performs debugging operations. To prevent the procedure from being invoked outside the context of your code you can add the new accessible by clause.

Before Oracle 12c it was really difficult to restrict which calls were permissible from within an application. At best, a list of privileges granted to roles and/or multiple users was necessary to facilitate more fine-grained access.

Consider for example an Application Programming Interface (API) which defines access to the tables of an application. The API could be defined in a way similar to the Java interface, providing just a stub that must be implemented elsewhere. The implementation can then be “private” to the application, in other words cannot be invoked outside the context of the calling API.

To demonstrate how easy it is, consider a procedure that writes debugging information into a log table.

SQL> create or replace procedure debug_proc( pi_message varchar2 )
  2  accessible by (input_api_interface_pkg)
  3  as
  4  pragma autonomous_transaction;
  5  begin
  6    insert into debug_tab (
  7          t, vc)
  8    values (
  9          systimestamp,
 10          pi_message
 11    );
 12  commit;
 13  end;
 14  /
 
Procedure created.

The important new information is in line 2: the new accessible-by clause defines that only stored units within the package input_api_interface_pkg can invoke the procedure debug_proc. Failing to do so will cause a runtime error:

SQL> exec debug_proc('test')
BEGIN debug_proc('test'); END;
 
      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00904: insufficient privilege to access object DEBUG_PROC
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

This is quite clear: you do not have the rights to invoke debug_proc from outside the package. You can of course reference more than one possible invoker in the accessible by clause, separated by commas.

SQL> create or replace procedure debug_proc
  2   accessible by (f_1,p_2,pkg_3,pkg_4)
  3  as
  4  begin
  5    dbms_output.put_line('invoker() has been called');
  6  end;
  7  /

The stored units referenced in the accessible by clause do not need to exist at the time of the object’s creation. You cannot however define white lists within packages. Trying to do so will result in an error:

SQL> create or replace package pkg_3 as
  2    procedure pkg_3_p1 accessible by (pkg_4);
  3  end;
  4  /
 
Warning: Package created with compilation errors.
 
SQL> show err
Errors for PACKAGE PKG_3:
 
LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PLS-00157: Only schema-level programs allow ACCESSIBLE BY

The error message actually indicates that this works in both directions: you cannot reference individual procedures or functions within a package either. If you try to reference a specific procedure in the package in another stored unit of code you are going to receive an error:

SQL> create or replace function ANSWER return number
  2   accessible by (pkg_5.pkg5_p1)
  3  as
  4  begin
  5   return 42;
  6  end;
  7 /
 
Function created.

Notice how procedure pkg5_p1 of package pkg_5 is (theoretically) allowed to execute the function. In reality, it is not (the package header is deliberately not shown).

SQL> CREATE OR REPLACE PACKAGE BODY PKG_5 as
  2    procedure pkg5_p1 (pi_vc varchar2) as
  3      the_answer number;
  4    begin
  5      the_answer := answer;
  6    end pkg5_p1;
  7    procedure pkg5_p2 (pi_n number) as begin null; end;
  8    procedure pkg5_p3 as begin null; end;
  9  end;
 10  /
 
Warning: Package Body created with compilation errors.
 
SQL> show err
Errors for PACKAGE BODY PKG_5:

LINE/COL ERROR
-------- -----------------------------------------------------------------
5/5      PL/SQL: Statement ignored
5/19     PLS-00904: insufficient privilege to access object ANSWER
SQL>

So if you want to restrict access to a function to a specific piece of code you need to create this independently and not within a package.

Other Improvements

There are many more improvements for developing Oracle database applications in the new release. Although they are not directly related to consolidation, they are noteworthy features. There is no particular order to these features; the one thing they have in common is their usefulness.

As with every release, there is always that one new feature that makes you long for a migration to the current version. But this has always been the case. The problem one is currently working on is always solved, or at least less of an issue, in a future version of the database that you cannot yet use.

Sequence Values as Default Values in Tables

One of the features the author missed most was the possibility to define the next value of a sequence as the default value for a table column. Quite often a developer finds himself in the situation where it is necessary to populate an artificial key with a sequence value. A classic example is the use of an Oracle replication technology. Some of Oracle’s replication technologies require you to add a unique key constraint where there was none before.

When running prior versions, the developer is faced with a number of problems. Obviously, it is not possible to insert a null value into the table as the application’s code currently would. Not many applications have a PL/SQL API to populate values in tables; if that were the case, the addition of the new column would not cause any issues because you could simply modify the API. For the more realistic case where a multitude of interfaces exist to populate table data (external applications, SQL*Loader, a user interface written in APEX, or similar) you cannot guarantee that the key will always be populated. A common solution to this kind of problem is the use of a before insert trigger. This way you can guarantee that there will always be a key value! A trigger however always means overhead. Consider this simple example: a table that requires a new key to be added using an ID column.

SQL> desc t1
 Name                                 Null?    Type
 ------------------------------------ -------- -------------------------
 VC                                            VARCHAR2(100)
 T                                             TIMESTAMP(6)
 
SQL> alter table t1 add id number;
 
Table altered.
 
SQL> alter table t1 add constraint
  2  pk_t1 primary key(id);
 
Table altered.
 
SQL> create sequence s_t1 cache 1000;
 
Sequence created.
 
SQL> create or replace trigger bit_t1 before insert on t1
  2  for each row
  3  begin
  4    if :new.id is null then
  5      :new.id := s_t1.nextval;
  6    end if;
  7  end;
  8  /
 
Trigger created.

With the trigger in place the application is not broken, and the primary key has been added completely transparently. However, there is a downside: performance will take a hit, especially if you perform this operation for many tables! Inserting a mere 100,000 rows into the table with the trigger enabled takes 8 seconds on a test system.

SQL> begin
  2  for i in 1..100000 loop
  3    insert into t1 (vc, t) values (i,systimestamp);
  4  end loop;
  5  end;
  6  /
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:08.00

Beginning with Oracle 12c it is possible to add a sequence as the default value for a column. The code example shown earlier changes to this:

SQL> alter table t1 modify id default s_t1.nextval;
 
Table altered.

The before insert trigger is disabled next. The same code block to insert 100,000 rows now takes a lot less time:

SQL> begin
  2  for i in 1..100000 loop
  3    insert into t1 (vc, t) values (i,systimestamp);
  4  end loop;
  5  end;
  6  /
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:04.26

So not only does the code look cleaner, but it also executes a lot faster and there is a lot less maintenance to be done. You can see the new default value in the user_tab_columns view for example:

SQL> select column_name,data_default
  2   from user_tab_columns
  3  where table_name = 'T1'
  4    and column_name = 'ID';
 
COLUMN_NAME                    DATA_DEFAULT
------------------------------ ------------------------------
ID                             "MARTIN"."S_T1"."NEXTVAL"

Increased varchar2 Limit

An interesting change has been made to the maximum length of the familiar varchar2 and some related data types. Instead of the previous limit of 4,000 bytes it is now possible to store up to 32,767 bytes. The initialization parameter in charge of changing the behavior is max_string_size. It is a static parameter, and changing it requires an instance restart. In addition, changing from the default standard string width to the extended string width requires starting the database or PDB in UPGRADE mode and executing the script $ORACLE_HOME/rdbms/admin/utl32k.sql.

The easiest way to change to 32k varchar2 fields is to apply the change in a PDB. The initialization parameter in the PDB is independent of the value in the root, so you can make the change—maybe on a clone of the PDB—and test the changed behavior. Here is an example of how the parameter has been changed in a Pluggable Database, PDB1:

[oracle@server1 ∼]$ sqlplus sys/***@server1/pdb1 as sysdba
 
[...]
 
SQL> alter pluggable database PDB1 close immediate;
 
Pluggable database altered.
 
SQL> alter pluggable database pdb1 open upgrade;
 
Pluggable database altered.
 
SQL> show parameter max_string_size
 
NAME                                  TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                       string      EXTENDED
 
SQL> @?/rdbms/admin/utl32k

After the script terminates you should restart the PDB in NORMAL mode before you can make use of the extended length as this example shows:

SQL> create table t2 (vc varchar2(32767));
 
Table created.
 
SQL> desc t2
 Name                                 Null?    Type
 ------------------------------------ -------- -------------------------
 VC                                            VARCHAR2(32767)

Be warned though that you need a tool other than SQL*Plus to insert that much data into the vc column: the inherent limit of SQL*Plus is 2,500 characters for a column.
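An anonymous PL/SQL block is one way around that limit, since PL/SQL has always allowed varchar2 variables of up to 32,767 bytes. The following sketch inserts a 32,000-character string into the table created above:

SQL> declare
  2    v_long varchar2(32767) := rpad('x', 32000, 'x');
  3  begin
  4    insert into t2 (vc) values (v_long);
  5    commit;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.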

Enhanced Support for Top-N Queries

Before Oracle 12c one had to perform a few tricks to get the top-n rows of a query, or to perform pagination of a result set. It has been said many times that users entering a query are unlikely to move beyond the first few pages of results. Fetching the complete result set of a query for a user interface usually does not yield much benefit. Oracle’s own SQL Developer for example fetches a limited subset of the query result set into the grid first. Only if the users scroll further down in the result set are more records fetched. This approach keeps resource consumption low and results in a better user experience.

Before explaining how the new support can benefit your application, let’s briefly review how applications are currently written in order to paginate a result set, such as displaying a search result in a web page. The following example shows some code slightly adapted from an excellent example Tom Kyte gave in 2006. This example shows objects owned by the user MARTIN. The objects are listed five at a time.

SQL> select *
  2    from ( select /*+ FIRST_ROWS(n) */
  3    a.*, ROWNUM rnum
  4        from ( select object_name, object_type, subobject_name from t3
  5                where owner = 'MARTIN'
  6                order by object_name, subobject_name ) a
  7        where ROWNUM <= :last_row_in_set )
  8  where rnum  >= :first_row_in_set
  9 /

The bind variables here constitute the clever bit of the query. In the initial query execution first_row_in_set is set to 1 while last_row_in_set is set to 5. The bind variables are then changed with every execution of the query allowing for a rolling window.

The complete result set for the user MARTIN is important for the rest of this section. Querying table T3 for all objects owned by MARTIN, and ordering the results by object_name and subobject_name, the following result is displayed:

SQL> select object_name,object_type,subobject_name
  2  from t3 where owner = 'MARTIN'
  3  order by object_name,subobject_name;
 
OBJECT_NAME          OBJECT_TYPE   SUBOBJECT_NAME
-------------------- -------------------- --------------------
ANSWER               FUNCTION
DEBUG_PROC           PROCEDURE
DEBUG_TAB            TABLE
I_PARTTAB            INDEX PARTITION       P1
I_PARTTAB            INDEX PARTITION       P2
I_PARTTAB            INDEX PARTITION       P3
I_PARTTAB            INDEX PARTITION       PMAX
I_PARTTAB            INDEX
PARTTAB              TABLE PARTITION       P1
PARTTAB              TABLE PARTITION       P2
PARTTAB              TABLE PARTITION       P3
PARTTAB              TABLE PARTITION       P4
PARTTAB              TABLE PARTITION       P5
PARTTAB              TABLE PARTITION       PMAX
PARTTAB              TABLE
PKG_3                PACKAGE
PKG_5                PACKAGE
PKG_5                PACKAGE BODY
PK_T1                INDEX
S                    SEQUENCE
SEQTEST              TABLE
SYS_C009851          INDEX
S_T1                 SEQUENCE
T1                   TABLE
T3                   TABLE
 
25 rows selected.

Now, getting back to the pagination query, it indeed returns the first 5 rows:

OBJECT_NAME          OBJECT_TYPE           SUBOBJECT_NAME             RNUM
-------------------- -------------------- -------------------- -----------
ANSWER               FUNCTION                                            1
DEBUG_PROC           PROCEDURE                                           2
DEBUG_TAB            TABLE                                               3
I_PARTTAB            INDEX PARTITION       P1                            4
I_PARTTAB            INDEX PARTITION       P2                            5

The next 5 records were then fetched by changing the bind variables of the query. This whole query can now be greatly simplified thanks to the new “row limiting clause” in the select statement. Two options exist: after an optional offset, either a fixed number of rows or a percentage of the result set can be returned. Returning to the above example, the offset is useful when a certain number of rows in the result set have to be skipped. Skipping the first five rows in the result set is achieved using this query:

SQL> select object_name, object_type, subobject_name from t3
  2  where owner = 'MARTIN' order by object_name, subobject_name
  3  offset 5 rows;
 
OBJECT_NAME          OBJECT_TYPE   SUBOBJECT_NAME
-------------------- -------------------- --------------------
I_PARTTAB            INDEX PARTITION       P3
I_PARTTAB            INDEX PARTITION       PMAX
I_PARTTAB            INDEX
PARTTAB              TABLE PARTITION       P1
[...]
T1                   TABLE
T3                   TABLE
 
20 rows selected.

As you would expect, the first 5 objects are not part of the result set. The query however does not limit the result set. Keep the execution plan of this query in mind, which is as follows:

SQL> select * from table(dbms_xplan.display_cursor);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  fyrpmgskxx073, child number 0
-------------------------------------
 select object_name, object_type, subobject_name from t3  where owner =
'MARTIN' order by object_name, subobject_name  offset 5 rows
 
Plan hash value: 3729804300
 
-----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time      |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |        |       |   425 (100)|          |
|*  1 |  VIEW               |      |     25 |  7250 |   425   (1)| 00:00:01 |
|   2 |   WINDOW SORT       |      |     25 |  1050 |   425   (1)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| T3   |     25 |  1050 |   425   (1)| 00:00:01 |
-----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber">5)
   3 - filter("OWNER"='MARTIN')
 
22 rows selected.

Pagination of the result set is achieved by adding the limit clause to the query. In order to display rows 6 through 10 in the result set, the query can be changed as follows:

select /*+ gather_plan_statistics */
  object_name, object_type, subobject_name from t3
  where owner = 'MARTIN' order by object_name, subobject_name
  offset :a rows
  fetch next :a rows only;
 
OBJECT_NAME          OBJECT_TYPE   SUBOBJECT_NAME
-------------------- -------------------- --------------------
I_PARTTAB            INDEX PARTITION       P3
I_PARTTAB            INDEX PARTITION       PMAX
I_PARTTAB            INDEX
PARTTAB              TABLE PARTITION       P1
PARTTAB              TABLE PARTITION       P2

Pagination is working nicely here. However, the explain plan becomes slightly more difficult to read:

SQL> select * from table(dbms_xplan.display_cursor(format=>'TYPICAL'));
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
SQL_ID  czwjx2jkxcj2t, child number 0
-------------------------------------
 select /*+ gather_plan_statistics */    object_name, object_type,
subobject_name from t3    where owner = 'MARTIN' order by object_name,
subobject_name    offset :a rows   fetch next :a rows only
 
Plan hash value: 171896480
 
----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |       |       |   425 (100)|          |
|*  1 |  VIEW                     |      |    25 |  7250 |   425   (1)| 00:00:01 |
|*  2 |   WINDOW SORT PUSHED RANK |      |    25 |  1050 |   425   (1)| 00:00:01 |
|*  3 |    FILTER                 |      |       |       |            |          |
|*  4 |     TABLE ACCESS FULL     | T3   |    25 |  1050 |   425   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE  WHEN
              (:A>=0) THEN FLOOR(TO_NUMBER(TO_CHAR(:A))) ELSE 0 END +:A AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">:A))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY
              "OBJECT_NAME","SUBOBJECT_NAME")<=CASE  WHEN (:A>=0) THEN
              FLOOR(TO_NUMBER(TO_CHAR(:A))) ELSE 0 END +:A)
   3 - filter(:A<CASE  WHEN (:A>=0) THEN FLOOR(TO_NUMBER(TO_CHAR(:A)))
              ELSE 0 END +:A)
   4 - filter("OWNER"='MARTIN')
 
31 rows selected.

The approach works well since the offset thankfully accepts a value of 0 rows as well. Your web frontend could simply keep track of the number of records already displayed and pass the bind variable :first_row_to_fetch as well as a constant for the :number_of_rows to fetch.
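With two descriptive bind variables (the names here are assumptions for illustration; the offset is the number of rows already displayed) the pagination query could look like this:

SQL> select object_name, object_type, subobject_name from t3
  2  where owner = 'MARTIN' order by object_name, subobject_name
  3  offset :rows_already_shown rows
  4  fetch next :page_size rows only;

Setting :rows_already_shown to 0 returns the first page.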

Changes Relevant to Database Availability

Oracle has always been at the forefront when it comes to supporting a highly available infrastructure for its database product. Broadly speaking you can categorize the high availability options into two categories. The first is all about protecting a single instance database. A database instance, in Oracle’s terminology, is broadly what you find in memory on a database server or within a virtual machine. An instance is gone as soon as the power is turned off, as it does not have a persistent part. The persistent information, in other words the database, is separate from the instance. A single instance database has a one-to-one mapping between instance and database, and is most likely the predominant deployment method of an Oracle database. It is also by far the easiest to manage. A multi-instance database is called a Real Application Clusters (RAC) database. In RAC the strict mapping of instance to database is lifted, and you can have n-to-1 mappings between instances and databases. Bear in mind that with RAC you always get only one copy of the database; it therefore does not qualify as a disaster recovery (DR) solution!

Among the options for protecting your single instance database you can choose from are:

  • Virtualization
  • Active/Passive Clusters

The loss of the instance, even for the briefest of moments, always means a loss of connectivity to the database. This connectivity loss is going to be noticed by the user. Depending on how well your application has been written, the end-users will either see the ORA-03113 “end of file on communication channel” error or a friendlier variant of the same. Loss of the instance can be completely acceptable, as long as the service can be restored within a reasonable recovery period. If all goes well then all there is to do is for the SMON process to perform instance recovery. Worse cases involve restarting the middle tier because it was never written with loss of database connectivity in mind. If such an interruption is not acceptable then you need to become more resourceful and invest in the previously mentioned Real Application Clusters. The active-active configuration allows the application to continue despite the loss of an instance, although there might be a brief period where the system seems to “hang” due to resource remastering. Of course, RAC cannot protect you from a brown-out in the data center: if all the servers go down so will your RAC database, and you might have to invoke your disaster recovery scenario.

Since not every database administrator is familiar with RAC and the supporting technology the following sections will introduce the most important aspects you need to know before you can appreciate the 12c new features for RAC. Please note that virtualization and active/passive clusters will be explained in much more depth in Chapter 4.

Brief Introduction to Real Application Clusters

Oracle RAC is a cost option on top of Oracle Enterprise Edition. Using a shared-everything approach, each database instance “sees” all the changes made by its peers. Operating the database on multiple instances offers resilience and more fault tolerance than single instance deployments, provided the applications are appropriately written and tested. RAC pays tribute to the fact that many enterprise deployments moved from what accountants consider expensive hardware to industry standard equipment. These enterprise deployments had often been on hardware with extensive Reliability, Availability and Serviceability (RAS) features built in. Although hugely beneficial to the stability of the platform, these features come at a cost. RAS features exist to a degree in the high-end x86-64 platform as well, but a server failure can still happen, with the previously mentioned problems associated with it. The RAC value proposition includes:

  • High Availability: thanks to the multi-instance database a server crash can be mitigated by cleverly architected applications.
  • Scalability: a Real Application Cluster can be extended should the need for new processing power arise.
  • Manageability: Multiple databases can be consolidated into a Real Application Cluster.
  • Reduced Cost of Ownership: Combining Pluggable Databases and RAC can potentially reduce the number of database servers, cutting cost for electricity, cooling, and real estate in the data center.

From a technical point of view RAC is based on Oracle Grid Infrastructure. Grid Infrastructure combines Oracle’s cluster-aware logical volume manager, named Automatic Storage Management (ASM), with a high-availability framework, Oracle Clusterware. Oracle Clusterware serves one main purpose: to hide from the user the fact that multiple physical machines are connected to form a cluster. As such Clusterware performs all the tasks you would expect from a cluster framework:

  • Cluster membership management: Every cluster framework must manage which hosts are considered healthy members of the cluster. Under no circumstances must the cluster find itself in a split-brain situation. In such a catastrophic state communication between cluster members is interrupted while the cluster nodes are still running. Adhering to its programming, the cluster software in each of the cluster halves will try to fail over resources of the nodes it considers crashed. The danger of data corruption looms here: if the application communicates with effectively unsynchronized cluster halves, different data can be written to disk. Oracle Clusterware uses a network and disk heartbeat to prevent a split brain scenario.
  • Fencing: If Clusterware detects that a node or set of nodes in the cluster is unresponsive then it will evict members from the cluster to prevent corruption. This process is called node fencing. Beginning with Oracle 11.2 fencing has been greatly improved using the Intelligent Platform Management Interface (IPMI) effectively allowing a cluster member to evict an unhealthy member using a hardware command.
  • Inter-node connectivity: RAC employs a shared-everything approach. A cluster database has one global SGA that spans all the nodes on which the database is up and running. For various reasons such as the transfer of database blocks over the network or the exchange of cluster-wide enqueue information database instances must communicate with one another.
  • Resource management: Everything Clusterware manages is referred to as a resource. The ASM instance, network, database, services—literally everything in a RAC environment is a resource. Each resource has an associated profile, stating on which node to execute the resource, how to monitor the resource’s health, and what to do if a resource has stopped working. Local resources by definition exist on one node only and cannot be failed over to another cluster node. Cluster resources however can be moved within the cluster.

Figure 2-2 shows a typical configuration for a Real Application Cluster consisting of two nodes. Note that the figure does not take any of the new 12c features into account. The management interface used for out-of-band management is not shown in order to simplify the diagram.

9781430244288_Fig02-02.jpg

Figure 2-2. Conceptual design of a RAC environment

You can see from Figure 2-2 that the RAC database is concurrently mounted by all instances. Using a third-party cluster file system for this purpose is supported. For quite some time now Oracle has tried to deliver all the required technology to run a clustered system itself, and its chosen technology, ASM, has matured and is more than adequate for the purpose.

Usually access to shared storage is provided by Fibre Channel host bus adapters; however, other possibilities exist as well. The inter-node communication mentioned earlier in this chapter is made possible by a dedicated, cluster-private network. Most sites employ either multiple aggregated Gbit Ethernet ports or a couple of 10 Gbit Ethernet ports for this purpose. More sophisticated deployments such as Exadata make use of InfiniBand for the cluster interconnect.

Once the operating system is configured for RAC the database administrator begins the installation of Grid Infrastructure which is a mandatory component when installing RAC.

The final step is to install the database binaries over the Grid Infrastructure layer. The installer will detect that you are trying to deploy the database binaries on a cluster and offer you a list of nodes to deploy the binaries to.

Brief Introduction to Automatic Storage Management

Automatic Storage Management (ASM) is a feature that was introduced in Oracle 10g Release 1. Among its primary design goals is the intention to simplify storage administration by automating disk and file management tasks. As with many new features introduced with the Oracle database, another aim is to reduce management overhead and deployment costs. ASM is a generic alternative to a clustered file system (CFS) and cluster logical volume manager that works on all supported platforms. The implementation of ASM differs greatly from other file systems, which are normally installed on the operating system in the form of kernel loadable modules plus a user-mode part for administration. ASM, on the other hand, is an additional lightweight Oracle instance on each server. ASM provides similar functionality to a classic CFS but also includes volume management capabilities, such as disk balancing and redundancy.

With the introduction of Oracle 10.2, ASM has become a popular choice for storage management, especially for RAC. As a sign of its maturity, documentation and functionality have increased with each release since its inception: while the documentation initially consisted of a chapter in the Administrator’s Guide in 10g, it now has a manual of its own.

ASM is built on Oracle Managed Files (OMF), a feature that was first introduced in Oracle 9i. OMF greatly simplifies the management of database files. Oracle ASM is by no means restricted to RAC as its position in this chapter suggests. ASM is a great choice if you are looking for standardization of the storage layer.

Conceptually, ASM resembles the classical logical volume manager (LVM) known from Linux and AIX, but it can do a lot more and, most importantly, it is cluster aware. Individual physical volumes, called ASM disks, are aggregated into groups; instead of volume groups, these entities are referred to as ASM disk groups. In a RAC environment, the ASM disks have to be located on shared storage accessible from all nodes in the cluster. Just like all other cluster resources, disk groups are managed internally by Clusterware. Unlike with the Linux LVM, for example, there are no logical volumes created on top of a disk group. Instead, the disk group as a whole can be used to create and store database files. Following the logic of Oracle Managed Files, all you need to do is define the disk group on which your database should be created.
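To make the concept more tangible, a minimal sketch of creating a disk group and directing Oracle Managed Files at it could look as follows. The disk paths and names are purely illustrative:

```sql
-- Illustrative only: aggregate two ASM disks into a normal-redundancy
-- disk group (run in the ASM instance)
SQL> create diskgroup data normal redundancy
  2  disk '/dev/asmdisk1', '/dev/asmdisk2';

-- In the database instance, with db_create_file_dest pointing at the
-- disk group, new files are created as OMF inside +DATA
SQL> alter system set db_create_file_dest = '+DATA';
SQL> create tablespace example_tbs;
```

No file names have to be specified anywhere; the OMF machinery derives them from the disk group.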

One of the striking beauties of Oracle ASM is the online-almost-everything approach. Adding and removing ASM disks from an ASM Disk Group is an online operation. After an addition or removal of a disk, a so-called rebalance operation ensues. Following a “Stripe-And-Mirror-Everything” (SAME) principle Oracle tries very hard to evenly distribute information on ASM disks. Figure 2-3 compares the classic LVM approach with ASM.

9781430244288_Fig02-03.jpg

Figure 2-3. Comparing a typical logical volume manager with Oracle ASM

Very often, discussions around ASM turn out to be of a political nature. With the deployment of ASM the database administrator enters the world of the storage or, depending on your company, the system administrator. Since those administrators are most likely not aware of ASM, what it can do, and how to use it, you will likely face a lot of resistance. The factor of job security also plays an important role in these discussions. You read before that ASM is implemented as an Oracle instance, which is radically different from the way the storage or operating system administrator worked before. Instead of entering commands at the shell prompt, the storage administrator will have to log in to the ASM instance and perform the maintenance from there. The alternative way of administering ASM is by using Oracle Enterprise Manager. Either way, the implementation of ASM represents a paradigm shift. Beginning with Oracle 11.1, however, it is possible to delegate the administration of the storage and cluster layer to a dedicated operating system account different from the oracle account. If needed, the separation of duties that has worked so well for Oracle 9i can be carried over into the cloud age.

Enhancements in the Cluster Layer

Oracle Clusterware and Automatic Storage Management together make up the cluster layer in Oracle RAC. It comprises the high availability framework and the storage management. For quite a while now it has been possible to run Oracle RAC without any third-party clusterware at all. For those users, who probably form the majority of Oracle’s RAC customers, the following sections are of interest. The limit of 63 ASM disk groups has been raised to 511. This could potentially be interesting for consolidated environments where for some reason each database has its own disk group.

Oracle also allows you now to store the ASM password file in a shared ASM disk group. This new ability is not only useful, but also a prerequisite for using Flex ASM.

Checking for Logical Corruption in ASM disk groups

The alter diskgroup command has been extended with the scrub clause. This clause allows the ASM administrator to check ASM disk groups, individual disks in a disk group, or individual ASM files for logical corruption in cases where ASM is responsible for protecting the data. If logical corruption is detected during the scrubbing process, ASM can try to repair it using mirror copies of the extent. By default ASM will only report corruption; you need to specify the repair keyword to let it try to fix the problem.

The final relevant parameter for this operation is the power level. Unlike with some other ASM commands you do not specify an integer, but rather one of low, high, or max.

If you suspect a problem with the disk group “RECO” of your system, and want potential problems reported rather than fixed on the spot, you could use the following command:

SQL> alter diskgroup reco scrub power high;
 
Diskgroup altered.

No further information was returned to the prompt, although it is expected that a summary of errors would be returned if there were any. Just like with the check diskgroup clause, additional information is recorded in the ASM instance’s alert.log. In the above example, which found no problems, the alert.log contained these few lines:

2013-10-10 08:59:42.373000 +01:00
SQL> alter diskgroup reco scrub power high
NOTE: Start scrubbing diskgroup RECO
Starting background process SCRB
SCRB started with pid=34, OS id=7934
SUCCESS: alter diskgroup reco scrub power high

Scrubbing will be prevented if the I/O load of the ASM instance is currently considered too high. You can view the progress of the scrubbing operation in v$asm_operation as usual.
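Assuming the usual columns of v$asm_operation, a query along the following lines shows the state of the operation and an estimate of the remaining work:

```sql
-- Sketch: monitor a long-running ASM operation such as a scrub or
-- rebalance from the ASM instance
SQL> select group_number, operation, state, sofar, est_work, est_minutes
  2  from v$asm_operation;
```

An empty result set simply means no long-running ASM operation is currently active.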

ASM Becomes More Flexible

Before Oracle 12c every node in the cluster needed to have its own ASM instance. This collection of ASM instances is generically referred to as an ASM cluster. Instances within the cluster communicate with each other with the goal of presenting shared storage to all users. Up until Oracle 11.2 these users were databases only, but with the introduction of the general purpose file system based on ASM, named ASM Cluster File System (ACFS), Oracle offers a complete, cluster-aware storage stack.

Up until 12c a failed ASM instance meant that all the databases on the host failed as well. Oracle Flex ASM, as the new set of features is named, addresses this situation by removing the strict requirement to have one ASM instance per cluster node. The default configuration of ASM up to version 12c is shown in Figure 2-4.

9781430244288_Fig02-04.jpg

Figure 2-4. The pre-12c ASM configuration with one ASM instance per cluster node

Flex ASM removes the strict requirement to have one, and only one, ASM instance per cluster node. Instead, with Flex ASM enabled, Oracle will create three ASM instances in the cluster, regardless of cluster size. The number of ASM instances can of course be configured, which matters most when pre-12c databases are to be hosted on the 12c Grid Infrastructure deployment. In such situations you will need to operate Flex ASM in the same way ASM operated in the earlier releases. Figure 2-5 shows a four-node RAC with Flex ASM enabled. All databases have already been migrated to 12c.

9781430244288_Fig02-05.jpg

Figure 2-5. Schematic display of a Flex ASM configuration in a four-node cluster, public and management network are not shown

Flex ASM can be configured from the start during the installation of Grid Infrastructure and is mandatory if you are planning on using a Flex Cluster (another new RAC 12c feature). Alternatively an existing configuration can be changed to Flex ASM using the ASM Configuration Assistant after the ASM spfile, password file, and the OCR and voting disks are all stored in a disk group within ASM.

ASM traffic can be sent over the private network but might collide with existing workloads. Thankfully there is an option to allow the ASM traffic to use a dedicated network, as shown in Figure 2-5. If a dedicated network is used, a new cluster resource of type ora.ASMNET1LSNR_ASM.lsnr will be created for the network. As you can imagine, Oracle will create multiple listeners if you specify multiple networks to be of type “ASM”. The resource is visible using the crsctl utility as shown here:

[root@rac12node1 ∼]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12node1               STABLE
               ONLINE  ONLINE       rac12node2               STABLE
               ONLINE  ONLINE       rac12node3               STABLE

The idea is strikingly simple: should an ASM instance to which databases are connected fail, the database will dynamically reconnect to another ASM instance in the cluster. The following example demonstrates the new principle in a three-node test cluster with the default cardinality reduced to two:

[grid@rac12node1 ∼]$ srvctl modify asm -count 2
[grid@rac12node1 ∼]$ srvctl status asm
ASM is running on rac12node3,rac12node1

The database started on node 2 will now not have its own ASM instance. It dutifully reports the fact in the alert.log:

NOTE: ASMB registering with ASM instance as client 0xffffffffffffffff (reg:89854640)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10007)

And it is also shown in the v$asm_client view. Here is the example of the first ASM instance:

SQL> select dg.name,c.instance_name,c.db_name,c.status
  2  from v$asm_diskgroup dg, v$asm_client c
  3  where dg.group_number = c.group_number
  4 and c.db_name = 'RCDB1'
  5 /
 
NAME       INSTANCE_N DB_NAME  STATUS
---------- ---------- -------- ------------
DATA       RCDB12     RCDB1    CONNECTED
DATA       RCDB11     RCDB1    CONNECTED

If an ASM instance such as the instance on node 3 crashes, the client will migrate to the surviving instance for the time it takes to restart the other instance:

NAME       INSTANCE_N DB_NAME  STATUS
---------- ---------- -------- ------------
DATA       RCDB11     RCDB1    CONNECTED
DATA       RCDB13     RCDB1    CONNECTED
DATA       RCDB12     RCDB1    CONNECTED

After a little while you see the third database instance connect back to the second ASM instance on node 3.

To see if Flex ASM is configured you can use the showclustermode command of the asmcmd command line utility:

[grid@rac12node3 ∼]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

The configuration of your ASM cluster can be queried as well, as shown next.

[root@rac12node1 usr]# srvctl config asm
ASM home: /u01/app/12.1.0.1/grid
Password file: +OCR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

Flex ASM also supports ACFS operations, but these are out of the scope of this section.

image Note  You may have also read about Flex Clusters allowing the administrator to create a more loosely coupled RAC system. In my opinion this feature—although nice—will not be part of a Database as a Service platform.

Evaluating Commands Before Execution

An interesting new feature for the database as well as the cluster layer is the option to evaluate commands before they are executed. The lower-level crsctl command as well as the everyday tool srvctl has a new option named -eval to give administrators more peace of mind.

The targets of your evaluation commands are a database instance, the cluster database, services, and server pools for policy-managed databases. You can evaluate valid combinations of these targets with the start, stop, remove, relocate, modify, and add commands.

The -eval option works only for policy-managed databases. If you try to use it with an administrator-managed RAC database then you will get the following error:

[oracle@rac12node1 ∼]$ srvctl start database -db cdb1 -eval
PRKO-2712 : Administrator-managed database cdb1 is not supported with -eval option

If your cluster deployments feature policy-managed databases then the command can potentially be very useful, especially when you are dealing with server pools, since there can be interactions between the current server pool and all the other server pools and their associated databases.
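For a policy-managed database the evaluation could be invoked as sketched below. The database name is hypothetical; the command reports what would happen without actually performing the operation:

```shell
# Hypothetical policy-managed database "pmdb1": evaluate the effect of
# stopping it without actually stopping anything
srvctl stop database -db pmdb1 -eval
```

The output lists the resources and server pools that would be affected, which is exactly the peace of mind the option was designed for.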

Non-RAC Options to Increase Availability

The previous sections have mainly dealt with cluster enhancements which allow better resilience to instance failure. But even with a multi-instance RAC system you can still incur outages when changing code in your application. For example, a library cache lock on a logon trigger can have devastating consequences. Edition-Based Redefinition (EBR) can alleviate many of these problems, but EBR, although available since Oracle 11g Release 2, has not yet resonated the way it should have with developers.

Some of the enhancements you will read about below do not require the use of editions. This should not prevent you from having a look at EBR, because it is beautiful! Do you remember when an index could be rebuilt online for the first time? No more need for a change window to rebuild indexes! The list of online operations available to the database administrator has grown again with Oracle 12c and you should know about them.

Moving a Partition without Blocking

Partitioning has seen quite a lot of improvement over the past releases. If you consider the options allowing you to combine partitioning schemes, interval partitioning, and reference partitioning then you can see why Partitioning is such an important option.

Oracle 12c has added many more enhancements, including the possibility to move a partition online. In versions before Oracle 12c you had to wait for a compatible lock to move a partition. Consider the following example, in which a simulated batch job continuously inserts information into a partitioned table created as follows:

SQL>  create table part_demo
  2  partition by range (creation_date) (
  3  partition p_2013     values less than (to_date('01.01.2013','dd.mm.yyyy')),
  4  partition p_jan_2013 values less than (to_date('01.02.2013','dd.mm.yyyy')),
  5  partition p_feb_2013 values less than (to_date('01.03.2013','dd.mm.yyyy')),
  6  partition p_mar_2013 values less than (to_date('01.04.2013','dd.mm.yyyy')),
  7  partition p_apr_2013 values less than (to_date('01.05.2013','dd.mm.yyyy')),
  8  partition p_may_2013 values less than (to_date('01.06.2013','dd.mm.yyyy')),
  9  partition p_jun_2013 values less than (to_date('01.07.2013','dd.mm.yyyy')),
 10  partition p_jul_2013 values less than (to_date('01.08.2013','dd.mm.yyyy')),
 11  partition p_aug_2013 values less than (to_date('01.09.2013','dd.mm.yyyy')),
 12  partition p_sep_2013 values less than (to_date('01.10.2013','dd.mm.yyyy')),
 13  partition p_oct_2013 values less than (to_date('01.11.2013','dd.mm.yyyy')),
 14  partition p_nov_2013 values less than (to_date('01.12.2013','dd.mm.yyyy')),
 15  partition p_dec_2013 values less than (to_date('01.01.2014','dd.mm.yyyy')),
 16  partition p_def values less than (maxvalue))
 17  enable row movement
 18  as select sysdate+dbms_random.normal*100 as creation_date,
 19*  object_id, object_name from dba_objects
SQL> /
 
Table created.
 
SQL> exec dbms_stats.gather_table_stats(user,'PART_DEMO')

Referential integrity constraints are then defined on the table. To simulate the batch job, the following little snippet of code is executed.

SQL> begin
  2   for i in 1..10000 loop
  3    insert into part_demo (
  4     creation_date,
  5     object_name
  6    ) values (
  7     (to_date('02.05.2013','dd.mm.yyyy')+i/1440),
  8     'test ' || i
  9    );
 10    end loop;
 11* end;
SQL> /
 
PL/SQL procedure successfully completed.
 
SQL>

At the same time the DBA wants to move the May partition from fast storage to slow storage. Here is the outcome of that exercise in releases up to and including 11.2. It is also the default behavior in 12c:

SQL> alter table part_demo move partition p_may_2013 tablespace tbs_arch_01;
alter table part_demo move partition p_may_2013 tablespace tbs_arch_01
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The new syntax to perform the same operation online is shown here:

SQL> alter table part_demo move partition P_MAY_2013 tablespace tbs_arch_01 online;
 
Table altered.

The operation takes a little longer but it is an online operation! Only when all the locks on the partition are removed will the operation complete.

Moving Data Files While They Are Accessed

Moving data files from one location to another has traditionally been an operation that required downtime or otherwise impacted the availability of your application. Moving a database from one file system to another, perhaps during a SAN replacement, commonly required the database administrator to shut the database down, use operating system utilities to copy the data files to the new location, and start the database in mount mode, followed by calls to alter database rename file ....

This task has now been greatly simplified by the introduction of the alter database move datafile command. Using the new command it is very simple to move data and temp files (but not online redo log files) from a file system into ASM while they are in use. It is also possible to move an Oracle Managed File (OMF) to another location where it remains an Oracle Managed File, allowing you to move OMF files from a file system into ASM. Oracle will temporarily create a copy of the data file, so ensure that there is enough space for both the copy and the original file.

After the move is completed, the old file is automatically removed unless you specify the keep keyword, which is available if the old file was not an Oracle Managed File. Here is an example of moving into ASM a data file that has accidentally been created on the file system for tablespace tbs_arch_01:

SQL> select file_name from dba_data_files
  2  where tablespace_name = 'TBS_ARCH_01';
 
FILE_NAME
----------------------------------------------------------------------------
/u01/oradata/cdb1/tbs_arch_01.dbf

The file is now moved to the correct location:

SQL> alter database move datafile '/u01/oradata/cdb1/tbs_arch_01.dbf'
  2  to '+DATA';
 
Database altered.

After the move the data file is in the correct location. The database stayed available the whole time the data file was being moved. Notice that this was executed in a Pluggable Database; Chapter 7 explains file names in PDBs in more detail.

SQL> select file_name from dba_data_files
  2  where tablespace_name = 'TBS_ARCH_01';
 
FILE_NAME
--------------------------------------------------------------------------------
+DATA/CDB1/E025A74226115F1CE0436538A8C0FDF9/DATAFILE/tbs_arch_01.274.821486701

Notice that the move data file command creates a copy of the data file you are moving. In most cases this is not a problem, but if you are short on space you should not execute multiple commands in parallel.
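If the source is not an Oracle Managed File, the keep keyword retains the original after the move. A sketch, with an illustrative file name, might look like this:

```sql
-- Illustrative: move a non-OMF data file into ASM while keeping the
-- original copy on the file system
SQL> alter database move datafile '/u01/oradata/cdb1/example01.dbf'
  2  to '+DATA' keep;
```

The retained copy is no longer part of the database and can be archived or deleted at leisure.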

Data and Information Lifecycle Management

Many vendors spend a lot of time and effort educating users on how important it is to use Information Life Cycle Management (ILM) on a grand scale to cut costs and become more efficient. The vendors’ value propositions are sound of course: many applications do not use any kind of life cycle management at all, wasting premium storage at high cost for all data. Since a lot of data is related to time in one way or another, it makes sense to put older, less frequently accessed data on lower performance (read: cost) storage while keeping frequently accessed data on higher storage tiers.

Storage vendors of course know about that problem and try to address it with clever software on the storage array. Since most enterprise arrays have some sort of storage tiering built in, software algorithms try to move the most frequently accessed data to the fastest storage tier while keeping “cold” data on slower storage. This happens automatically.

Storage Tiers

Storage engineers and vendors often refer to a storage tier when discussing persistent storage. The classic division of storage in an enterprise array consists of the following tiers, sorted by performance in descending order.

  • Flash memory
  • High performance Fibre Channel disks
  • High capacity Fibre Channel disks
  • High capacity “near-line” disks

The actual implementation of course depends on the array vendor, but most range from a high IOPS/low latency technology for very demanding applications down to very high capacity disks, slow but cheap, for data that the business cannot bring itself to archive elsewhere.

Partitioning

Partitioning is one of the most important tools for Information Lifecycle Management (ILM) in the Oracle database as it allows the administrator and application developer to group data in a heap table into more manageable, smaller pieces. Implemented correctly the effect of using partitioning can be felt immediately, and is usually a very positive one. A lot of the data in transactional systems is time based in one way or another. Usually, the most recently inserted data is what the application needs to access most. In trading systems for example this would be the current hour, day, or week depending on the granularity and volume of data to store and process. Recent data is often accessed and modified, therefore having the highest demand for high quality storage. This low latency/high bandwidth storage comes at premium cost, which is why older data that has cooled off a little is moved to lower-cost storage. Depending on the age of the data and retention period mandated by the auditors this process could be repeated a number of times until the data is on tertiary storage, maybe even in a platform-neutral format for really long-term archiving.

Many developers have already made good use of the Oracle Partitioning option to partition large tables into range partitions spanning the aforementioned intervals. For administrators, though, moving partitions to lower-tier storage required implementing the change during a maintenance window, in which partitions were moved (and optionally compressed) to a different storage tier.

Automatic Data Optimization in Oracle 12c

Automatic Data Optimization (ADO) is based on access tracking of different segments in the database (the feature requires a separate license). A segment in this context can be a table or a (sub)partition.

image Note  ADO is not supported for pluggable databases. But ADO is so incredibly useful that you should know about it anyway.

The alter table and create table statements have been enhanced with new syntax allowing you to define ILM policies once tracking has been enabled. The initialization parameter heat_map controls access tracking and has to be set system-wide. Once heat map tracking is enabled, information about segment access is recorded in the dictionary. Live tracking information is found in v$heat_map_segment. That view provides information about reads and writes as well as about full and lookup scans. A number of DBA%-views exist as well, which can be queried for segment access information.

One common use case of data lifecycle management is to compress “cold” data. The next example does exactly this. The implemented strategy is as follows:

  1. Access tracking is globally enabled by setting heat_map to “on.”
  2. A new ILM policy is attached to the table, stating that the cold data should be compressed.

In the context of Information Lifecycle Management, ADO makes the most sense with data range partitioned on a date column. Hopefully the application has been designed with that in mind, allowing the optimizer to perform runtime partition pruning. Partitioning for performance and partitioning for manageability can sometimes be conflicting goals.

To demonstrate the new lifecycle management technology, the following demo has been created. A table containing 100,000 rows is range partitioned on a date column.

create table t1 (
  d    date,
  vc1  varchar2(100),
  vc2  varchar2(100),
  n1   number
)
partition by range (d)
interval (NUMTODSINTERVAL(7, 'day'))
(
  partition p_start values less than (to_date('01.01.2008','dd.mm.yyyy'))
);

The interval partitioning algorithm created 53 partitions in addition to the initial one. The table is populated with random dates and some padding for a year in the future. A small PL/SQL procedure executed every second by ten scheduler jobs should bring the table to the attention of the access tracking algorithm:

CREATE PROCEDURE p
IS
  CURSOR c
  IS
    SELECT d, vc1
    FROM t1
    WHERE d = TRUNC(to_date('23.07.2014', 'dd.mm.yyyy') - DBMS_RANDOM.value(0,128));
  TYPE t IS TABLE OF c%rowtype INDEX BY pls_integer;
  v_t t;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_t LIMIT 500;
    EXIT WHEN v_t.count < 500;
  END LOOP;
  -- close the cursor to release the resources associated with it
  CLOSE c;
END;
/

Notice that the code deliberately leaves out half the data created to satisfy the “no access” criteria for the ILM process. After a little while individual partitions of T1 appeared in V$HEAT_MAP_SEGMENT:

SQL> select object_name, subobject_name, segment_write, segment_read,
  2    full_scan,lookup_scan
  3  from v$heat_map_segment where object_name = 'T1';
 
OBJECT_NAM SUBOBJECT_ SEG SEG FUL LOO
---------- ---------- --- --- --- ---
T1         SYS_P623   NO  NO  YES YES
T1         SYS_P641   NO  NO  YES YES
[...]
T1         SYS_P635   NO  NO  YES YES
T1         SYS_P660   NO  NO  YES NO
 
53 rows selected.
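The ten scheduler jobs driving this workload were created along the lines of the following sketch. The job names and the one-second repeat interval are illustrative assumptions; only the procedure P comes from the example above:

```sql
BEGIN
  FOR i IN 1..10 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'ILM_LOAD_J' || i,   -- hypothetical job names
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'P',
      repeat_interval => 'freq=secondly',
      enabled         => TRUE);
  END LOOP;
END;
/
```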

Implementing the ILM policy is done by invoking the alter table command for table T1:

SQL> alter table martin.t1
  2  ilm add policy
  3  row store compress basic segment
  4  after 3 months of no access;
 
Table altered.

The criteria to apply the ILM policy can be based on access, modification, or creation. In this example the policy is applicable to each table partition. The interval can be any number of days, months, or years. In the above example the data in segments (=partitions) with no access for three months will be compressed using BASIC compression. If you are lucky enough to be on an Exadata system you can request Hybrid Columnar Compression as well. You can view the policies attached to a table in the view dba_ilmpolicies. More detail for a policy is available in the view dba_ilmdatamovementpolicies.
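Policies based on the other criteria follow the same pattern. The sketch below shows a modification-based policy and, for Exadata, a creation-based Hybrid Columnar Compression policy; the intervals and compression levels are arbitrary examples:

```sql
-- Compress further once data has not been modified for a year
ALTER TABLE t1 ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 12 MONTHS OF NO MODIFICATION;

-- On Exadata: Hybrid Columnar Compression two years after creation
ALTER TABLE t1 ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE LOW SEGMENT
  AFTER 2 YEARS OF CREATION;
```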

Fast forward three months and you can see that the ILM processes in the database start reviewing the access to the segments more closely. The view dba_ilmevaluationdetails lists the outcome of the evaluation. The system periodically checks ILM policies against the heat map; this process is externalized in dba_ilmtasks. You can join dba_ilmtasks to dba_ilmevaluationdetails for more detail about when a given task has been executed. In the above example the following evaluation results have been made available by the ILM engine:

SQL> select task_id,policy_name,object_name,subobject_name,
  2    selected_for_execution, job_name
  3  from dba_ilmevaluationdetails
  4  order by policy_name,selected_for_execution;
 
   TASK_ID POLICY_NAM OBJEC SUBOBJECT_ SELECTED_FOR_EXECUTION                     JOB_NAME                              
---------- ---------- ----- ---------- ------------------------------------------ ------------
         2 P1         T1    SYS_P658   PRECONDITION NOT SATISFIED
[...]
         2 P1         T1    SYS_P650   SELECTED FOR EXECUTION                     ILMJOB190
         2 P1         T1    SYS_P648   SELECTED FOR EXECUTION                     ILMJOB192

These jobs are executed by the database scheduler, and you can find out about them in dba_ilmresults. The outcome of the operation is visible in the dba_tab_partitions view:

SQL> r
  1  select partition_name, compression, compress_for
  2  from DBA_TAB_PARTITIONS
  3* where table_name = 'T1'
 
PARTITION_NAME                 COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------------------
SYS_P621                       DISABLED
[...]
SYS_P632                       ENABLED  BASIC
SYS_P660                       ENABLED  BASIC
SYS_P670                       ENABLED  BASIC
[...]
 
54 rows selected.
 
SQL>

As requested the data has been compressed with basic compression.
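The scheduler jobs reported in dba_ilmevaluationdetails can be traced in dba_ilmresults with a join similar to this sketch (column names as documented for 12c; verify them against your release):

```sql
SELECT t.task_id, r.job_name, r.job_state, r.completion_time
  FROM dba_ilmtasks t
  JOIN dba_ilmresults r ON (r.task_id = t.task_id)
 ORDER BY t.task_id, r.job_name;
```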

Automatic storage tiering is the second option you have in ADO. It works slightly differently: it is not based on access, but rather on the amount of free space in the tablespace. The ILM engine reviews a tablespace and compares its free space with the thresholds it finds in dba_ilmparameters. If the tablespace on which the segment is stored falls below the free-space threshold, a job will be started to move the data from the higher storage tier to the storage defined in the ILM policy. An example of an ILM policy to change the storage tier is shown here:

SQL> alter table t1
  2  ilm add policy
  3  tier to ARCH segment;
 
Table altered.

During the next maintenance window—the time when the ILM policies are evaluated—jobs will be started to move the partitions to the low cost storage. As with the compression you can see the results of the operation in dba_ilmresults.
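The thresholds the ILM engine consults are exposed in dba_ilmparameters and can be adjusted with the DBMS_ILM_ADMIN package. The threshold values below are examples only, not recommendations:

```sql
-- Current ILM thresholds (name/value pairs)
SELECT name, value FROM dba_ilmparameters;

-- Example: have tiering jobs kick in once a tablespace is 80% full
BEGIN
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 80);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 20);
END;
/
```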

Infrastructure Changes

Changes to the way the Oracle database interacts with the infrastructure it executes on are of particular interest to the consolidation project. Some features in the 12c release are very relevant and have been moved into their respective chapters. One of the best features added had already been semi-officially introduced with 11.2.0.2 as “clonedb.” But there are others you should know about.

Database Resident Connection Pool Exposed to Middle Tier

The usefulness of exposing the connection pool to the middle tier is not immediately visible to everyone, including the author when he first read about it. Why would anyone create a (Database Resident) connection pool for another connection pool (in the middle tier)? Before explaining the finer details of the feature, let’s recap what the Database Resident Connection Pool (DRCP) was intended for. When it was introduced in Oracle 11.1, DRCP allowed programming languages without support for connection pooling in the middle tier to reduce the cost of establishing connections. At the time DRCP was introduced, the popular web programming language was PHP. Perl Common Gateway Interface scripts using the popular DBI module could equally have benefited from the feature, but Perl’s importance as a web development language was diminishing quickly at the time.

In short, all (web) applications that are not really modular do not use connection pools, and instead establish a session to the Oracle database as the first thing in their code. These applications were ideal candidates for the DRCP. The idea was to create a connection pool in the database, rather than in the middle tier where such a feature did not exist. The DRCP is somewhere in between a shared server and a dedicated server, but with a lot less code path to go through than MTS. More importantly it is a database feature, unlike the traditional connection pool which exists on the middleware exclusively.

With Oracle 12c the DRCP has been opened up for Java Database Connectivity (JDBC). The immediate question that comes to mind is:

JDBC already allows you to use connection pooling in the form of the Universal Connection Pool (UCP). Why then should you add a connection pool for the connection pool?

The answer is: scalability! Many connection pools are (inappropriately) configured with a minimum number of connections set to a value less than the maximum pool size. This is a bad idea since sudden execution plan changes can cause real havoc or even downtime for the application. Imagine a configuration in which 10 middle-tier servers establish 10 sessions each, initially. The increment is another 10 sessions until the maximum size of 100 sessions per pool (=per server) is reached. Now if for some reason an execution plan changes and response times for queries increase, it is likely that the application servers will start spawning new sessions to compensate. But instead of 10 times 10 sessions the database now has to deal with potentially 10 times 100 sessions, which is a much higher workload. And many of these sessions will actually be busy executing code, further increasing latency and so on. If the DBAs cannot quickly rectify this problem, a downward spiral ensues that can even lead to the complete overloading of the database server, as shown in Figure 2-6.

9781430244288_Fig02-06.jpg

Figure 2-6. Structural view on the DRCP exposure for JDBC applications

Using a DRCP on the database host can potentially limit the impact of activity spikes on the underlying infrastructure. As you can see from Figure 2-6, the middle-tier servers remain the owners of their own connection pools. In addition, each RAC node has its own DRCP to which the connection pools connect. Just as with “plain” DRCP, the effect of connection storms can potentially be better contained.

The correct solution to the problem however would be to reduce the number of processes working on the database and to set the minimum pool size equal to the maximum pool size to avoid unpleasant surprises.
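For reference, the server-side pool is managed with the DBMS_CONNECTION_POOL package, and a client requests a pooled server through its connect string. The pool sizes below are illustrative assumptions:

```sql
-- Start and size the default pool (run as SYSDBA)
EXEC DBMS_CONNECTION_POOL.START_POOL;
EXEC DBMS_CONNECTION_POOL.CONFIGURE_POOL(minsize => 10, maxsize => 100);
```

A client then asks for a pooled server either via (SERVER=POOLED) in the CONNECT_DATA section of a TNS entry, or by appending :POOLED to the service name in easy connect syntax.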

Copy-on-Write for Cloning Databases

One of the problems frequently encountered by the database administrator is the exponential growth in storage requirements. Imagine for a moment that one of your operational databases is 10 TB in size. If the business requires five different test environments for different streams of work, then those 10 TB quickly become 60 TB, which can no longer be considered a small footprint. The claim that storage is cheap can no longer be considered true for everyone. Enterprise-class storage is expensive, especially when you need a lot of it. Some companies have long-term agreements with the large storage providers, rendering them less flexible when it comes to offering discounts to internal customers. The high price tag for additional storage has made the requisition of additional terabytes or even gigabytes a lengthy process with a lot of red tape around it.

Thankfully there is a way to reduce the storage requirement for test environments, and it is built into the Oracle database. The benefit of the new technology increases with every additional copy of the live environment. The underlying technology is called copy-on-write. Instead of requiring full copies of the database for each test environment, only one full copy is required. Additional environments are then created based on the clone, but with independent data files. These data files contain only the changes made during testing. Experience shows that test environments rarely change a large portion of the operational database. To benefit from the new method of cloning databases you need to use file-based storage: the Network File System (NFS). This works hand in hand with many organizations’ desire to make use of cheaper storage tiers for non-critical workloads. You should not be fooled by the lower price though, and you should perform testing to ensure that the new method of cloning database environments matches your performance expectations. Figure 2-7 shows the minimum setup needed for using clonedb. The production and development servers are both connected to an NFS filer using dual 10 Gbit Ethernet. The NFS filer exports two file systems. The export /m/backups is used for the production database image copy. The other export, /m/CLONE, will be used for the clone database’s data files and is mounted to the development host. The infrastructure is likely supplemental to an existing backup configuration, but it is of course possible to take regular file system backups from the NFS mount. You should check with the business users about their expectations regarding Recovery Time Objectives and other details around backups. Normally the hosting platform will use RMAN backups directly; the clonedb infrastructure exists in addition to it.

9781430244288_Fig02-07.jpg

Figure 2-7. The minimum architectural components for copy-on-write cloning

Note the key feature of direct NFS here. Oracle’s direct NFS client—available since the 11g release—is implemented entirely in user mode, potentially allowing environments to benefit from less overhead compared to kernel NFS. Another great advantage is the availability of direct NFS on Windows, a platform that does not otherwise benefit from NFS.

image Note  The use of the clonedb feature requires the use of direct NFS. The setup of direct NFS is out of the scope of this chapter but is available from the author’s website.

Overview

The process to be followed for the new feature is remarkably simple and comprises these steps:

  1. Create an image copy of your live database
  2. Make this backup available on the test server using direct NFS
  3. Prepare the test database
    1. Create a controlfile
    2. Use DBMS_DNFS to rename the data files
  4. Open the database

The following sections provide more detail about the process. You may have seen references to the above process on My Oracle Support and the Internet before Oracle 12c was released. However, this is the first time that the clonedb feature is part of the official documentation set and therefore more generally available. Some of these steps need to be executed each time a clone is needed; others are a one-time setup. Let’s review the process in more detail.

Preparation

Creating an image copy of the database can be done in many ways, ranging from a cold backup taken using the well-known combination of tar and gzip or a simple Unix “cp” command, to an RMAN image copy. Below is an example of such a copy command using RMAN. The example assumes that appropriate settings have been made in the RMAN catalog database, such as parallelism and destination. The necessary image copy of the database is created on the production host and stored in an NFS mount, namely /m/backups/, which in turn is mounted to /u01/oraback on the production host.

RMAN> backup as copy database format '/u01/oraback/NCDB/%u';

As always it is a good idea to take backups of the database during quieter periods. If you are not using NFS you need to make the backup available on your test database server. To say it in the words of a Perl developer: “there is more than one way to do it.” Unlike with an RMAN duplication, the location of the backup on the test server does not matter.

Next you will need to prepare the create controlfile statement for the clone database. The controlfile can be created in a number of ways. You either create it using the command alter database backup controlfile to trace as '/path/to/script.sql', or you use the Oracle-provided script clonedb.pl in $ORACLE_HOME/rdbms/install. The use of clonedb.pl requires you to set a few environment variables, and you can execute the script on the production host or the destination server. The following example demonstrates its use. The environment variables in the below output are required and need to reflect the directory structure on the clone database’s host. MASTER_COPY_DIR indicates the NFS mount with the production database backup, CLONE_FILE_CREATE_DEST points to the location where you’d like to store the clone database’s data files (which must be NFS mounted to the development host), and finally CLONEDB_NAME sets the name of the clone database.

[oracle@server2 install]$ perl clonedb.pl
usage: perl clonedb.pl <init.ora> [crtdb.sql] [rename.sql]
[oracle@server2 install]$ export MASTER_COPY_DIR=/media/backup/NCDB
[oracle@server2 install]$ export CLONE_FILE_CREATE_DEST=/u01/oradata/CLONE
[oracle@server2 install]$ export CLONEDB_NAME=CLONE
[oracle@server2 install]$ perl clonedb.pl /tmp/initNCDB.ora /tmp/crtdb.sql /tmp/rename.sql

Review the files created and amend them to suit your needs. The initNCDB.ora file in the above output is the production database’s initialization file. In addition to the two SQL scripts, the Perl script will create an initialization file for the clone database in the directory indicated by the environment variable CLONE_FILE_CREATE_DEST. The parameter file will also need amending to suit your needs. Continuing the example, the following parameter file was used:

*.audit_file_dest='/u01/app/oracle/admin/CLONE/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.db_block_size=8192
*.db_domain=''
*.db_name=CLONE
*.db_recovery_file_dest='/u01/fra'
*.db_recovery_file_dest_size=4800m
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLONEXDB)'
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=768m
*.undo_tablespace='UNDOTBS1'
#*.db_create_file_dest=/u01/oradata
*.clonedb=TRUE
*.control_files='/u01/oradata/CLONE/ctrl01.ctl','/u01/oradata/CLONE/ctrl02.ctl'

Please ensure that all the required directories exist, especially the audit file destination. The parameter clonedb must be set to true. If you intend to connect to the database as SYS please create a password file as well. You should also update the oratab file and add the new clone database to it.

Creating the Clone Database

With the parameter file in place connect to the database and execute the crtdb.sql script. For example:

SQL> @crtdb
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.
 
Total System Global Area  801701888 bytes
Fixed Size                  2293496 bytes
Variable Size             318767368 bytes
Database Buffers          478150656 bytes
Redo Buffers                2490368 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE CLONE RESETLOGS
  2      MAXLOGFILES 32
  3      MAXLOGMEMBERS 2
  4      MAXINSTANCES 1
  5      MAXLOGHISTORY 908
  6  LOGFILE
  7    GROUP 1 '/u01/oradata/CLONE/CLONE_log1.log' SIZE 100M BLOCKSIZE 512,
  8    GROUP 2 '/u01/oradata/CLONE/CLONE_log2.log' SIZE 100M BLOCKSIZE 512
  9  DATAFILE
 10  '/media/backup/NCDB/NCDB/system01.dbf' ,
 11  '/media/backup/NCDB/NCDB/sysaux01.dbf' ,
 12  '/media/backup/NCDB/NCDB/undotbs01.dbf' ,
 13  '/media/backup/NCDB/NCDB/users01.dbf'
 14  CHARACTER SET WE8ISO8859P15;
 
Control file created.

Notice how the create controlfile statement references the image copies made available to the development server via NFS. Following this step you need to execute the rename.sql script. It will rename the files pointing them to the NFS mount. The directory /m/CLONE—exported from the NFS appliance—is mounted to /u01/oradata/CLONE on the development server.

begin
 DBMS_DNFS.CLONEDB_RENAMEFILE(
   '/media/backup/NCDB/system01.dbf', '/u01/oradata/CLONE/system01.dbf');
 DBMS_DNFS.CLONEDB_RENAMEFILE(
   '/media/backup/NCDB/sysaux01.dbf', '/u01/oradata/CLONE/sysaux01.dbf');
 DBMS_DNFS.CLONEDB_RENAMEFILE(
   '/media/backup/NCDB/undotbs01.dbf', '/u01/oradata/CLONE/undotbs01.dbf');
 DBMS_DNFS.CLONEDB_RENAMEFILE(
   '/media/backup/NCDB/users01.dbf', '/u01/oradata/CLONE/users01.dbf');
end;
/

This step will only succeed if the destination location is an NFS mount. Although it is theoretically not necessary to recreate the create controlfile script every time, you should still make a habit of doing so. If someone added a data file to the production system in the meantime, that data file will not be in the controlfile and you have to start over. After completion of this step you can open the database with the resetlogs option. Don’t forget to add tempfiles and other auxiliary structures you may need! If you list the database files now using the ls command’s “-s” flag, which reports the actual allocated size of sparse files, you will see that they really take up little or no space on disk:

[oracle@server2 ∼]$ ls -lsh /u01/oradata/CLONE/*.dbf
total 220K
92K -rw-r-----. 1 oracle oinstall 551M Nov 13 22:45 sysaux01.dbf
16K -rw-r-----. 1 oracle oinstall 211M Nov 13 22:45 system01.dbf
16K -rw-r-----. 1 oracle oinstall 1.6G Nov 13 22:45 undotbs01.dbf
16K -rw-r-----. 1 oracle oinstall 5.1M Nov 13 22:45 users01.dbf

The first column indicates the actual space used, and the familiar figures in the middle of the output show the size the file would occupy if this was a full-sized copy. The space savings are immediately visible.
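The final steps mentioned above—opening the clone with resetlogs and adding a tempfile—could look like this sketch; the tempfile path and size are assumptions matching the directory layout of the example:

```sql
SQL> alter database open resetlogs;

Database altered.

SQL> alter tablespace temp add tempfile
  2  '/u01/oradata/CLONE/temp01.dbf' size 1g;

Tablespace altered.
```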

Deprecation of OEM DB Console

With immediate effect, Oracle has removed the Database Console it introduced with Oracle 10g. There is no further warning: it is gone in Oracle 12c. Although unexpected, this step should not cause trouble for most organizations, which rely on centralized monitoring solutions such as OEM 12c Cloud Control. The replacement, however, is very neat and deserves a closer look, even though it requires Flash to operate.

During the database creation, the Database Configuration Assistant will assign an unused port for DB Express; on most systems this will be 5500. The documentation states that this port is also the HTTPS port for XML DB, which is no longer an optional component of the database.

image Note  You can get the HTTPS port by issuing the following statement: “select dbms_xdb_config.gethttpsport from dual.”

As an interesting side note you can see the port registered with the listener:

[oracle@server1 ∼]$ lsnrctl status
 
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 25-JUL-2013 20:36:31
 
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                25-JUL-2013 20:24:19
Uptime                    0 days 0 hr. 12 min. 11 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/product/12.1.0.1/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ol64/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server1.example.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=server1.example.com)(PORT=5500))
  (Security=
    (my_wallet_directory=/u01/app/oracle/product/12.1.0.1/dbhome_1/admin/CDB1/xdb_wallet))
   (Presentation=HTTP)(Session=RAW))

Point your web browser to the HTTPS port just detected and wait a few seconds for the interface to load. You are presented with the user interface shown in Figure 2-8.

9781430244288_Fig02-08.jpg

Figure 2-8. DB Express after the initial login

The dashboard is a nice overview of the software’s potential. A lot of the information you can get and set pales in importance compared with the “Performance Hub.”

The Performance Hub

The Performance Hub screen, which is available from the Performance drop-down menu at the top of the screen, certainly is the best part of DB Express. You need to be appropriately licensed to use the feature. Its overview page is shown in Figure 2-9.

9781430244288_Fig02-09.jpg

Figure 2-9. DB Express Performance Hub

As you can see, the summary screen is divided into a number of areas. In this example the time picker has been hidden to free up some more space. When shown, it allows you to select an interval over the last hour or alternatively use a historic time period if it is still available. The page also presents options to open other views of your system, namely:

  • RAC if applicable
  • (Top) Activity
  • Workload
  • Monitored SQL
  • ADDM
  • Current ADDM findings

We will cover the most interesting ones of these a little later. Continuing the discussion of the summary screen you can see the following panes:

  • Host runnable processes, indicating the load on the server. The contributors of the load are broken down into background and foreground processes plus all the other non-Oracle processes contributing to the current state of the operating system’s run queue.
  • Host memory: shows the database’s SGA and its components relative to the total memory on the system. This is a useful view to show automatic memory operations of individual SGA components.
  • Active Sessions: this panel gives you an overview of the current activity in total or broken down into services and—in case of RAC—instances.
  • I/O: The final pane on this page shows you I/O Requests, Throughput, and Latency.

Hover the mouse over these panes and you will notice they are all interactive.

Top Activity

This screen is a combination of the ASH Analytics you already know from Oracle Enterprise Manager Cloud Control and the Top Activity screen familiar from pre-12c versions of Enterprise Manager. The Top Activity page is shown in Figure 2-10:

9781430244288_Fig02-10.jpg

Figure 2-10. DB Express Performance Hub: ASH Analytics and Top Activity

If you have already worked with ASH Analytics you will immediately feel comfortable with this screen. It allows you to analyze your database’s workload by combining the available dimensions. One possibility is to review a time period at a coarse level, for example using the wait class. Upon identifying a particular wait class consuming more resources than expected you can click on the wait class, drilling down further. Whenever you drill down a level, a filter indicates what you are currently looking at. The number of statements in the lower half is also reduced to those contributing most to the metric under review.

9781430244288_Fig02-11.jpg

Figure 2-11. DB Express Performance Hub: Drilling down into wait events

You can finally investigate individual SQL statements: use SQL Monitoring, schedule a Tuning Advisor task, and so on. Although not ground-breakingly new, the Activity page in the Performance Hub is a great tool, visualizing your workloads and—thanks to ASH Analytics—pointing out performance problems quite easily.

Workload

The final page to be discussed in this section is the Workload page, shown below in Figure 2-12:

9781430244288_Fig02-12.jpg

Figure 2-12. DB Express Performance Hub: Workload

The Workload page allows you to view the application’s workload profile in a very elegant way. The top-left panel shows the user calls, parse calls, redo size, and network traffic. The Sessions panel shows you the rate of logins, number of currently connected users, and number of open cursors.

Shaping of Network Traffic

Other interesting features affecting the infrastructure services required to run the Oracle database are related to network compression and a change in the maximum size of the Session Data Unit (SDU). New parameters in the sqlnet.ora file allow you to enable or disable compression. You need to set sqlnet.compression to “on” to enable compression if you have the correct license for this feature. Data Guard redo streaming is exempt from this parameter by the way, but that has little relevance: Data Guard has been able to compress archived logs for gap resolution since 11.1. Oracle 11.2 extended the scope of the compression to redo transmission.

If network compression is enabled via sqlnet.ora then you have the option to further define the aggressiveness of the compression. The parameter sqlnet.compression_levels can take two values: low and high. Needless to say, a higher compression ratio comes at the cost of increased CPU consumption. To alleviate that effect it is possible to define a threshold in bytes for data compression. Only if the size of the data sent over the wire exceeds the threshold will the database server enable its compression algorithms. The default is to leave anything less than 1024 bytes alone.
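Put together, a sqlnet.ora fragment using these parameters might look as follows; treat the chosen values as examples rather than recommendations:

```
# sqlnet.ora -- example compression settings (license required)
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
# compress only payloads larger than 2048 bytes instead of the 1024-byte default
SQLNET.COMPRESSION_THRESHOLD = 2048
```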

Threaded Architecture

There have been only two major forms of implementation in Oracle’s recent releases: the all-threaded model on the Windows platform, and the process architecture in the UNIX world. Beginning with 12c, the UNIX world can benefit from a multi-threaded architecture as well. This is a dramatic change that requires the revision of a popular interview question about the Oracle architecture. Looking back, the Oracle architecture has always distinguished between the major process types:

  • The foreground process
  • The background process
  • Potential slave processes

The new threaded architecture allows the administrator to create most background processes as threads within a process. User processes in the dedicated server model will also be created as threads. It should also be stated clearly that the new threaded model is not an option for Oracle on the Windows platform, since Oracle has always used the threaded approach there.

The Reason for Threads

Before explaining how the thread architecture is different from the process model it is important to review the actual difference between the two approaches. As with almost anything in Linux history, there have been different thread implementations. Since kernel 2.6 (its release date now seems like an eternity ago) the Native POSIX Thread Library (NPTL) is the standard. You can see that it is referenced in the make.log file, which lists all the output of the linking phase. Search for -lpthread to see which binaries are linked against it. The main executable certainly links against it:

[oracle@server1 ∼]$ ldd $(which oracle) | grep thread
        libpthread.so.0 => /lib64/libpthread.so.0 (0x000000314f600000)

This library is part of the standard C library environment on Linux and therefore is available by default. But why has Oracle gone to such great lengths to implement the background processes using threads? First, a thread is an independent execution unit within a process. In the most basic case there will be a one-to-one mapping between processes and threads. Such an implementation however would not make much sense. Instead most models—including the one described here—use multiple threads per process. Threads share the address space of their parent process and are not as independent as processes, but on the other hand they allow for faster context switching. Using a threaded model of execution can help with scalability, as you will see later.

The Process Model

The default mode of operation in 12c on Unix is still the process model. The main background processes are listed in the dynamic performance view v$bgprocess. Reviewing the process model allows me to remind you of the many background processes that have been added with every release! On a typical system you will find the background processes using the following query:

SQL> select name,description
  2  from v$bgprocess
  3  where paddr <> '00';

Each of the processes you will see when executing the query on your system corresponds to exactly one process on the operating system. Oracle uses column SPID in view v$process to pair a process in the database to its corresponding process on the operating system. To change from the process model to the threaded model you need to change an initialization parameter:

SQL> alter system set threaded_execution=true scope=spfile;
 
System altered.

This parameter is not dynamic and you have to restart the instance for it to take effect. There is one caveat though: operating system authentication no longer works with the threaded model.
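The pairing between v$bgprocess and the operating system described earlier can be made visible with a join such as this sketch:

```sql
-- Map each active background process to its operating system process ID
SELECT bg.name, bg.description, p.spid
  FROM v$bgprocess bg
  JOIN v$process p ON (p.addr = bg.paddr)
 WHERE bg.paddr <> '00';
```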

Preparing for Threaded Execution

When you simply restart the instance after changing the initialization parameter you will be in for a surprise after you issue the “startup” command:

[oracle@server1 ∼]$ $ORACLE_HOME/bin/sqlplus / as sysdba
[...]
Connected to an idle instance.
SQL> startup
ORA-01017: invalid username/password; logon denied

You cannot mount or open the database by using the bequeath protocol. Instead, you must use a password file and connect over the network. Continuing from the previous output:

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> alter database mount;
 
Database altered.
 
SQL> alter database open;
 
Database altered.

If you have not yet created a password file you definitely should do so if you want to use the threaded execution mode. Additionally you should consider statically registering your databases with the listener. Static registration allows you to connect to the database over Oracle Net and start an instance even when there is no pmon process to register the service with the listener.
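
Static registration is configured in the listener.ora file. A minimal sketch follows; the SID name CDB1 and the ORACLE_HOME path are placeholders that must be adjusted for your environment:

```
# Hypothetical example: GLOBAL_DBNAME, SID_NAME, and ORACLE_HOME
# are placeholders for your own instance and installation path
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB1)
      (SID_NAME = CDB1)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    )
  )
```

After reloading the listener with lsnrctl reload, the statically registered service is advertised even while the instance is down, so a SYSDBA connection over the network can be used to start it.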

Administrative connections to the database are not the only area where changes have occurred. With the threaded model the architecture changes: you still have the same background processes as before, but they look very different. Using the ps command to view the processes you will find that most of them have been combined:

[oracle@server1 ∼]$ ps -ef | grep -i cdb1
oracle   22107     1  0 10:11 ?        00:00:00 ora_pmon_CDB1
oracle   22109     1  0 10:11 ?        00:00:00 ora_psp0_CDB1
oracle   22114     1 17 10:11 ?        00:00:58 ora_vktm_CDB1
oracle   22118     1  1 10:11 ?        00:00:04 ora_u004_CDB1
oracle   22124     1 12 10:11 ?        00:00:42 ora_u005_CDB1
oracle   22130     1  0 10:11 ?        00:00:00 ora_dbw0_CDB1

Instead of a one-to-one mapping of entries in v$bgprocess to operating system processes, most background processes are now implemented as threads inside a few container processes. To see what is hiding inside the ora_u004_CDB1 process you can use a custom report format when invoking the ps command:

[oracle@server1 ∼]$ ps -eLo user,pid,tid,nlwp,class,rtprio,comm:30  | grep 22118
USER       PID   TID NLWP CLS RTPRIO COMMAND
oracle   22118 22118   13 TS       - ora_scmn_cdb1
oracle   22118 22119   13 TS       - oracle
oracle   22118 22120   13 TS       - ora_gen0_cdb1
oracle   22118 22121   13 TS       - ora_mman_cdb1
oracle   22118 22127   13 TS       - ora_dbrm_cdb1
oracle   22118 22131   13 TS       - ora_lgwr_cdb1
oracle   22118 22132   13 TS       - ora_ckpt_cdb1
oracle   22118 22133   13 TS       - ora_lg00_cdb1
oracle   22118 22134   13 TS       - ora_lg01_cdb1
oracle   22118 22135   13 TS       - ora_smon_cdb1
oracle   22118 22140   13 TS       - ora_lreg_cdb1
oracle   22118 22141   13 TS       - ora_rbal_cdb1
oracle   22118 22142   13 TS       - ora_asmb_cdb1
[oracle@server1 ∼]$

The output shows that the process with OSPID 22118 has 13 lightweight processes (column NLWP); their lightweight process IDs are shown in column TID, which stands for Thread ID. The “traditional” process names are still visible in the command column. If you examine the output of the first ps command you can see that some traditional processes remain:

[oracle@server1 ∼]$ ps -eLf | egrep  -i "uid|cdb1" | 
> awk ' { if ($6 == 1 || $1=="UID")  print}'
UID        PID  PPID   LWP  C NLWP STIME TTY          TIME CMD
oracle   22107     1 22107  0    1 10:11 ?        00:00:00 ora_pmon_CDB1
oracle   22109     1 22109  0    1 10:11 ?        00:00:01 ora_psp0_CDB1
oracle   22114     1 22114 17    1 10:11 ?        00:01:52 ora_vktm_CDB1
oracle   22130     1 22130  0    1 10:11 ?        00:00:00 ora_dbw0_CDB1
 [oracle@server1 ∼]$

Process monitor (PMON), the virtual keeper of time (VKTM), process spawner (PSP0), and database writer (DBW0) processes are still implemented as traditional UNIX processes. Notice the absence of the SMON process! The view v$process has been changed slightly to cater to the new execution model. Comparing Oracle’s view on its processes with the operating system yields the following result:

SQL> select pid,sosid,spid,stid,execution_type,pname
  2  from v$process
  3  where background = 1
  4    and spid = 22118
  5  order by execution_type;
 
       PID SOSID           SPID       STID       EXECUTION_ PNAME
---------- --------------- ---------- ---------- ---------- -----
         5 22118_22120     22118      22120      THREAD     GEN0
         6 22118_22118     22118      22118      THREAD     SCMN
         7 22118_22121     22118      22121      THREAD     MMAN
        11 22118_22127     22118      22127      THREAD     DBRM
        14 22118_22131     22118      22131      THREAD     LGWR
        22 22118_22142     22118      22142      THREAD     ASMB
        16 22118_22133     22118      22133      THREAD     LG00
        17 22118_22134     22118      22134      THREAD     LG01
        18 22118_22135     22118      22135      THREAD     SMON
        20 22118_22140     22118      22140      THREAD     LREG
        21 22118_22141     22118      22141      THREAD     RBAL
        15 22118_22132     22118      22132      THREAD     CKPT
 
12 rows selected.

The output of the above command has to be interpreted depending on the execution type. For a “PROCESS” nothing changes from the way previous Oracle releases handled processes on UNIX. If an entry in v$process has an execution type of “THREAD” then the output has to be read as follows:

  • The column SPID still references the process on the operating system level, with the exception that such a process can and will host multiple threads
  • STID is the thread number as shown in the output of ps -eLf, column LWP
  • SOSID is simply the concatenation of the SPID and the STID, separated by an underscore

Thankfully Oracle still provides us with the process name in the view as well. The use of the threaded execution model reduces the overhead of processes attaching to the SGA. In a very simple example the same database is first started with threaded_execution set to false, and then with the parameter set to true. Without any user connected, the IPC statistics after the startup command completed showed the following:

[oracle@server1 ∼]$ ipcs -m
 
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x74464570 1451851778 oracle     640        35651584   50
0x00000000 1451884547 oracle     640        3204448256 25
 
[oracle@server1 ∼]$

The number of processes attached to the shared memory segments (that is, the SGA) was 75. Restarting the database with the threaded model enabled shows a different picture.

[oracle@server1 ∼]$ ipcs -m
 
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x74464570 1221394434 oracle     640        35651584   14
0x00000000 1221427203 oracle     640        3204448256 7

The number of attaches is down to 21. This can have a positive effect in consolidated environments with many databases per server. In addition to the inherent benefits of threads, such as simplified memory management and faster context switching, there has also been significant work on the Linux kernel scheduler to allow scaling to hundreds of thousands of threads since NPTL became mainstream.

Summary

There are many really interesting new features in Oracle 12c. Some have made their way into this chapter; others could not be included in order to keep the page count in check. As with every new database release, the easiest way to get started is the New Features Guide. This document introduces the new features by functional area and points you to further information in the respective guide. Every document in the documentation set has a “what is new ...” section at the beginning; it provides more information than the New Features Guide and again points you to the relevant chapter.

Of all the available new features this chapter presented a taste of what you can expect from the latest version of the database. Some of the contents here will be referenced in other chapters, especially the threaded execution model and Automatic Storage Management. In the author’s opinion ASM is a great, simple, and most importantly unified storage mechanism the DBA can employ on any currently supported platform. By implementing ASM the agility of provisioning storage will improve, and thanks to the online-almost-everything, stripe-and-mirror approach ASM is well suited for large deployments, be it single instance or Real Application Clusters.
