Chapter 4. Schema Consolidation in Enterprise Manager 12c

Schema as a service provides database consolidation by allowing administrators to host multiple application schemas within a single database. This offers database as a service (DBaaS) capability to potentially hundreds of application users without creating database sprawl. Users can perform day-to-day actions such as provisioning, monitoring, and backup, all from a single self-service console. Schema as a service leverages database features such as Oracle Resource Manager and Oracle Database Vault to isolate users of the cloud. These features are complemented by metering and showback/chargeback capabilities, which provide visibility and transparency into resource consumption by each user.

However, schema as a service also has problems, the main one of which (covered in detail later in this chapter) is namespace collision. Having said that, until the introduction of Oracle Database 12c with the multitenant option, schema as a service was really the only viable way of consolidating multiple applications into a single database, and many Oracle customers used it successfully.

Before we look at the details of using schema as a service, there are two areas we need to cover:

• We need to have an understanding of the components that make up the underlying architecture. This is important for both schema consolidation (covered in this chapter) and database consolidation (covered in Chapter 5, “Database Consolidation in Enterprise Manager 12c”).

• We also need to understand the deployment issues that have to be addressed when undertaking a consolidation exercise. Again, these are common to both schema and database consolidation, but the way we address those issues will vary between the different consolidation models. We’ll come back to these deployment issues after we’ve discussed setting up and using schema as a service.

Let’s start by looking at the architecture and components.

Architecture and Components

Oracle Enterprise Manager Cloud Control 12c delivers the complete spectrum of database consolidation, as depicted in Figure 4.1.

Figure 4.1. Consolidation models

Image

In Oracle terminology, hosts containing monitored and managed targets are grouped into logical pools. These pools are collections of one or more Oracle database homes (used for database requests) or databases (used for schema requests). A pool contains database homes or databases of the same version and platform—for example, a pool may contain a group of Oracle Database 12.1.0.1 container databases on Linux x86_64.

Pools can in turn be grouped into zones. In the DBaaS world, a zone typically comprises a host, an operating system, and an Oracle database. In a similar vein, when defining middleware as a service (MWaaS) zones, a zone comprises a host, an operating system, and an Oracle WebLogic application server. Collectively, these MWaaS and DBaaS zones are called platform as a service (PaaS) zones. Users can perform a few administrative tasks at the zone level, including starting and stopping, backup and recovery, and running chargeback reports for the different components making up a PaaS zone.

In the DBaaS view of a PaaS zone, self-service users may request either new databases or new schemas in an existing database. The databases can be either single instance or a Real Application Clusters (RAC) environment, depending on the zones and service catalog templates that a user can access.

Diagrammatically, these components and their relationships are shown in Figure 4.2.

Figure 4.2. Components of a platform as a service zone

Image

Schema as a Service Setup

Now that we’ve looked at the architectural components that make up a consolidation environment, let’s look at the details of how it is set up.

Schema as a service can be used to provide profiles and service templates for both an empty schema service and a schema service with seed data. In each case, the cloud infrastructure setup is very similar to the pluggable database as a service (PDBaaS) model, which we discuss in the next chapter. Typically, these steps are done only once and consist of the following:

• Configuring the software library

• Defining roles and assigning them to users

• Creating PaaS zones and pools

• Configuring request settings for the DBaaS cloud

• Configuring quotas for the self-service roles

The only difference with schema as a service from the setup perspective is how the pool is defined. For schema as a service, you define a pool of databases (rather than database homes) to which the schemas are deployed.

Before we drill into the details, let’s talk about the environment we’ll be using to set this up. It’s a fairly simple environment consisting of two hosts, host1 and host2. host1 contains the production (prod) database, our reference or master database; it contains the HR schema that we will be replicating with schema as a service. host2 contains the test database. The databases need to exist before we set up schema as a service. This environment is shown diagrammatically in Figure 4.3.

Figure 4.3. Schema as a service environment

Image

Creating a Directory Object

Schema as a service uses a directory object to export the data from the reference database—in our case, the HR schema in the prod database—so we need to create a directory object and grant HR read/write access to it. Obviously, you can do that through SQL*Plus, but I’ll be honest—I can never remember the syntax, so it’s quicker for me to do it through Enterprise Manager than to look up the syntax in the documentation.
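For the record, here is roughly what the equivalent SQL looks like, a minimal sketch using the directory name and operating system path from this walkthrough, run as a suitably privileged user such as SYSTEM:

CREATE OR REPLACE DIRECTORY dbaas_schema_export AS '/u01/oracle/schema_export';
GRANT READ, WRITE ON DIRECTORY dbaas_schema_export TO hr;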

1. This step is done from the prod database home page by following the path Schema → Database Objects → Directory Objects. After logging in, you see the screen, shown in Figure 4.4, where you can click the Create button.

Figure 4.4. Creating a directory object, step 1

Image

2. On the Create Directory Object page, you need to provide a name for the directory object (in this case, I’m using DBaaS_Schema_Export) and an operating system path to the directory you will be using (e.g., /u01/oracle/schema_export). You can click the Test File System button to validate the path is correct (see Figure 4.5).

Figure 4.5. Creating a directory object, step 2

Image

3. On the next screen, you are shown the host name and Oracle Home location, and you can enter credentials to validate the path. In this case, you already have a named credential, so you select that and click Login (see Figure 4.6).

Figure 4.6. Creating a directory object, step 3

Image

4. The system will now log in as that user and validate that the path exists. Provided you have not made any typos, you should see a message that the directory exists, and you can simply click Return (see Figure 4.7).

Figure 4.7. Creating a directory object, step 4

Image

5. You need to assign the correct privileges on the directory object to the HR user. To do that, you click the Privileges tab (see Figure 4.8).

Figure 4.8. Creating a directory object, step 5

Image

6. A list of users defined in the prod database appears. You can scroll through that list until you find the HR user, then select it and click OK (see Figure 4.9).

Figure 4.9. Creating a directory object, step 6

Image

7. Select both Read Access and Write Access, and click OK (see Figure 4.10).

Figure 4.10. Creating a directory object, step 7

Image

8. You should now see a message that the directory has been created successfully, and the directory should be listed in the directory objects list as well (see Figure 4.11).

Figure 4.11. Creating a directory object, step 8

Image

Creating a Database Pool

Now that we’ve created the directory object, we can go through the rest of the process of setting up schema as a service.

1. Start by following the path Setup → Cloud → Database (see Figure 4.12).

Figure 4.12. Creating a database pool, step 1

Image

2. As mentioned earlier, we need a database pool that has been created specifically for schema as a service. For this task, you select For Schema from the Create dropdown list (see Figure 4.13).

Figure 4.13. Creating a database pool, step 2

Image

3. We need to provide the pool details, credentials, and PaaS infrastructure zone details, then click Add to select databases for the pool (see Figure 4.14).

Figure 4.14. Creating a database pool, step 3

Image

4. On the Select and Add Targets pop-up, you can select multiple databases to add to the pool, and in a real cloud environment, you normally would do exactly that. In the small-scale lab that I’m using, I simply select the TEST database row and click Select (see Figure 4.15).

Figure 4.15. Creating a database pool, step 4

Image

5. Back on the Setup page, you have now entered all the values you need, so you just click Next (see Figure 4.16).

Figure 4.16. Creating a database pool, step 5

Image

6. The Policies page is where you set up the placement policies for the resources in the pool. You can place constraints on the maximum number of schemas that can be created on a database in the pool (via the maximum number of database services), as well as set up the maximum workload parameters for each service. The placement algorithm uses these parameters to determine in which database the schema is placed (when there is more than one database in the pool, obviously). In the example shown in Figure 4.17, you have set the maximum number of database services to 15 and enabled the workload limitations and Resource Manager. All that remains to do is click Submit.

Figure 4.17. Creating a database pool, step 6

Image
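The reason this constraint is expressed as a maximum number of database services is that each schema request consumes one database service on the target database. Conceptually, it is the same kind of service you could create by hand; a minimal sketch using DBMS_SERVICE (the service name here matches the one we’ll request later, but creating it manually like this is purely illustrative) would be:

BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'HR2_Service',
    network_name => 'HR2_Service');
  DBMS_SERVICE.START_SERVICE('HR2_Service');
END;
/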

7. You will then see a message that the Database Pool has been created successfully (see Figure 4.18).

Figure 4.18. Creating a database pool, step 7

Image

Creating a Profile and Service Template

Now that we’ve created the pool, the next step is to create a Profile and a Service Template for schema as a service.

1. To start this process, click Profiles and Service Templates (see Figure 4.19).

Figure 4.19. Creating a profile, step 1

Image

2. First, create a profile by clicking Create in the Profiles region (see Figure 4.20).

Figure 4.20. Creating a profile, step 2

Image

3. The first screen of the wizard requires you to select the magnifying glass to search for a reference target (see Figure 4.21).

Figure 4.21. Creating a profile, step 3

Image

4. In a small environment, you can already see the prod database listed, but you could search for it if there were lots of targets. Once that row is chosen, you click Select (see Figure 4.22).

Figure 4.22. Creating a profile, step 4

Image

5. On the Create Database Provisioning Profile: Reference Target page (see Figure 4.23), two regions need attention. First, for the Reference Target region, uncheck the Database Oracle Home checkbox, click the Structure and Data radio button, and select the Export Schema Objects radio button. Second, in the Credentials region, provide the relevant named credentials for the host and the database. Then click Next.

Figure 4.23. Creating a profile, step 5

Image

6. On the Content Options wizard step, choose the HR schema from the Available Schemas list and move it to the Selected Schemas list (see Figure 4.24).

Figure 4.24. Creating a profile, step 6

Image

7. The Dump region tells you where the export files for the schema being exported will be placed. In my environment, this directory is an NFS mount point I used when I created the directory object earlier. Click Add to specify the dump directory (see Figure 4.25).

Figure 4.25. Creating a profile, step 7

Image

8. Next, choose the row containing the directory object you created earlier, and click Select (see Figure 4.26).

Figure 4.26. Creating a profile, step 8

Image

9. Back on the Directory Locations region, you need to specify the log directory. Personally, I would prefer that this default to the same directory added for the dump file—maybe I should add that as an enhancement request! You can click on the magnifying glass to do this step (see Figure 4.27).

Figure 4.27. Creating a profile, step 9

Image

10. As you haven’t created a separate directory object for the logs to go to, you simply select the same directory object again and click Select (see Figure 4.28).

Figure 4.28. Creating a profile, step 10

Image

11. If you are exporting a decent-sized data set, you can also specify the degree of parallelism for the export job. However, the HR schema isn’t particularly large, so we leave the degree of parallelism at the default of 1 and click Next (see Figure 4.29).

Figure 4.29. Creating a profile, step 11

Image

12. On the next screen, give the profile a meaningful name and click Next (see Figure 4.30).

Figure 4.30. Creating a profile, step 12

Image

13. On the Review step, you can double-check that everything is as expected, and then click Submit (see Figure 4.31).

Figure 4.31. Creating a profile, step 13

Image

14. At this stage, a procedure is created and executed, and you are redirected to the Procedure Activity screen. You can click View → Expand All to see all the steps that will be executed in the procedure. You can also change the View Data refresh rate in the top right corner so you can see the procedure activity status refreshing until it is complete. Once the procedure completes successfully, you’ll see a screen like the one shown in Figure 4.32.

Figure 4.32. Creating a profile, step 14

Image

The profile you’re going to use has now been created. Next, you need to create a service template using the following steps:

1. Go back to the Database Cloud Self Service Portal Setup page by following the path Setup → Cloud → Database (see Figure 4.33).

Figure 4.33. Creating a service template, step 1

Image

2. Again, you click Profiles and Service Templates (see Figure 4.34).

Figure 4.34. Creating a service template, step 2

Image

3. This time, you want to select For Schema from the Create dropdown list in the Service Templates region (see Figure 4.35).

Figure 4.35. Creating a service template, step 3

Image

4. Provide a meaningful name and description for the service template, then click the magnifying glass to select a profile to import the schema from (see Figure 4.36).

Figure 4.36. Creating a service template, step 4

Image

5. Select the DBaaS_Schema_Profile profile you created in the previous section and click Select (see Figure 4.37).

Figure 4.37. Creating a service template, step 5

Image

6. Back on the General step of the wizard, you would normally select a “master account” that will have the necessary privileges to manage objects owned by other schemas in the export. In this case, of course, the profile has only one schema in the export, so the master account should be automatically set to HR. Make sure that HR has been selected, and click Add in the Zones region (see Figure 4.38).

Figure 4.38. Creating a service template, step 6

Image

7. In my example, I’m using the East Coast Zone, so I select that row and click Select (see Figure 4.39).

Figure 4.39. Creating a service template, step 7

Image

8. Select the East Coast Zone again, and this time click the Assign Pool button to assign a pool to the zone (see Figure 4.40).

Figure 4.40. Creating a service template, step 8

Image

9. This time, you select the DBaaS_Schema_Pool pool, and click Select (see Figure 4.41).

Figure 4.41. Creating a service template, step 9

Image

10. Now you need to set the Shared Location. The Shared Location is a filesystem where the export files are located, so click the magnifying glass next to the Shared Location field (see Figure 4.42).

Figure 4.42. Creating a service template, step 10

Image

11. Locate the OS directory you used for the directory object created earlier, then click OK (see Figure 4.43).

Figure 4.43. Creating a service template, step 11

Image

12. That’s all you need to provide on the General wizard step, so you can click Next (see Figure 4.44).

Figure 4.44. Creating a service template, step 12

Image

13. On the Configurations step, you want to set up different workload sizes that can be chosen by the self-service user at runtime, based on the CPU, memory, and storage requirements of a particular service. To do this, you click Create (see Figure 4.45).

Figure 4.45. Creating a service template, step 13

Image

14. In this case, the workloads are likely to be fairly small, so allocate 0.03 cores, 0.03 GB of memory, and 1 GB of storage at the low end, and click Create (see Figure 4.46).

Figure 4.46. Creating a service template, step 14

Image

15. Likewise, you can create a couple of other workloads by repeating the same steps. In the Schema Privileges region, you can provide a name for the database role that will be associated with the master account for the service, and you can define a tablespace that will be created for the service as well. In my example, I’ve left the default role name and set the tablespace to be specified by the workload size selected at request time. Then click Next (see Figure 4.47).

Figure 4.47. Creating a service template, step 15

Image

16. On the next step of the wizard, you can set scripts to be run before and after creation and/or deletion of the service instance. We are not going to do that, so just click Next (see Figure 4.48).

Figure 4.48. Creating a service template, step 16

Image

17. A service template can be configured with one or more roles, so click Add to add the DBAAS_CLOUD_USERS role created earlier (see Figure 4.49).

Figure 4.49. Creating a service template, step 17

Image

18. Select the row for that role, and click Select (see Figure 4.50).

Figure 4.50. Creating a service template, step 18

Image

19. That’s all you need to do on the Roles step, so just click Next (see Figure 4.51).

Figure 4.51. Creating a service template, step 19

Image

20. Finally, you can review all the settings for the Service Template, and click Create to create the new service template (see Figure 4.52).

Figure 4.52. Creating a service template, step 20

Image

21. You should now see a message that the service template has been created successfully, and see the template listed in the Service Templates region (see Figure 4.53).

Figure 4.53. Creating a service template, step 21

Image

Now we are done. The next step is to start using schema as a service.

Using Schema as a Service

The first step when using the Database Cloud Self Service Portal with schema as a service in Enterprise Manager 12.1.0.4 is to log in as the self-service user (not the self-service administrator [SSA]).

1. Provide the right username and password, and click Login (see Figure 4.54).

Figure 4.54. Using schema as a service, step 1

Image

2. By default, you are taken to the Infrastructure Cloud Self Service Portal page. Select Databases from the Manage dropdown list (see Figure 4.55).

Figure 4.55. Using schema as a service, step 2

Image

3. Next, request a schema from the Database Service Instances region (see Figure 4.56).

Figure 4.56. Using schema as a service, step 3

Image

4. On the Select Service Template pop-up, select the DBaaS Schema Service with Data template you created earlier, and click Select (see Figure 4.57).

Figure 4.57. Using schema as a service, step 4

Image

5. On the Create Schema page, you need to provide information for three regions:

General: In this region, you provide a request name, select the zone the schema will be created in, provide a database service name, and choose a workload size from the workloads we created earlier.

Schedule Request: This is where you define when the request will start and an end date after which the service instance will be removed. You also have the option to keep the service instance indefinitely.

Schema Details: Here you define schema details, the master account, and the tablespace that will be defined as part of the service instance. While most of the other information you provide is self-explanatory, some of this region may be a bit unclear, so let’s look at these fields in more detail:

Schema Details: The schemas that will be created as part of this self-service request; which schemas appear here depends on the service template chosen. You can choose to create multiple schemas at once if your template was based on that, but in this example, I only selected the HR schema. Each schema will be remapped to another name based on the provided prefix, so in this example, you will end up with a new schema called DBAAS_HR (a sketch of the underlying remapping follows this list). Note also that you can choose to have different passwords for each schema if your request has multiple schemas, or alternatively, if you’re lazy like me, you could keep the same password for each schema. Obviously, it’s better from a security perspective to not be lazy.

Master account: The master account is the account that has privileges over all the schemas created as part of this service request.

Tablespace: This is the name of a tablespace that will be created to contain the schema objects as part of the service request.
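Under the hood, the schema remapping works along the lines of a Data Pump import with schema (and, where needed, tablespace) remapping. A minimal sketch is shown below; the dump file and tablespace names are illustrative assumptions rather than values taken from an actual request:

impdp system DIRECTORY=dbaas_schema_export DUMPFILE=hr_schema.dmp \
  REMAP_SCHEMA=hr:dbaas_hr REMAP_TABLESPACE=users:dbaas_ts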

The fields on this page that are marked with an asterisk (*) are mandatory, so at a minimum you need to provide values for those fields. Once you have filled them in, you just click Submit to start the service request processing (see Figure 4.58).

Figure 4.58. Using schema as a service, step 5

Image

6. Back on the Database Cloud Self Service Portal page, you can swap the refresh rate from manual to every 30 seconds (see Figure 4.59).

Figure 4.59. Using schema as a service, step 6

Image

7. You should also notice the Usage region has been updated to reflect the newly submitted request (see Figure 4.60).

Figure 4.60. Using schema as a service, step 7

Image

8. After a short time, you will notice the HR2_Service instance has been added to the list of Database Service Instances. If you want to see more details, you can click the link in the Status column of the Requests region. (Depending on the screen refresh timing, you will see either Running or Success there—the fact that you now have a new instance in the Database Service Instances list is your real indication that the service instance was created successfully.) You should also notice that two requests were actually added in this case—one to create the service instance and the other to remove it, as we specified a duration of 7 days for the service instance lifetime (see Figure 4.61).

Figure 4.61. Using schema as a service, step 8

Image

9. If you click on either Running or Success, you can see the Request Details pop-up (see Figure 4.62).

Figure 4.62. Using schema as a service, step 9

Image

Selecting any of the execution steps will show more details in the Execution Details region for that particular step. You can also see that some steps weren’t needed (like the custom scripts), so they will show a status of Skipped. You can click OK to close that window.

Deployment Issues

Now that we understand the architecture and components that are used in the different consolidation models and how schema as a service is both set up and used, we need to examine some standard deployment issues that must be addressed. These issues include security, operational, resource, and fault isolation, as well as scalability and high availability. See Chapter 1, “Database as a Service Concepts—360 Degrees,” for definitions of these terms. Here we look at how each of these issues affects schema as a service.

Security Isolation when Using Schema as a Service

In the schema as a service environment, the effect of granting a privilege or role is contained to the schema where the grant was made, thus ensuring greater security. One of the main issues (if not the main issue) in a schema as a service environment is namespace collision. Namespace collision may be mistakenly resolved by creating public synonyms, which of course is not recommended from a security perspective. The result is that while schema as a service does not by itself reduce security, the administrator’s decisions can leave it less secure than PDBaaS.

For most configurations, Oracle’s out-of-the-box database security profiles are sufficient to limit access to data in the schema as a service environment. However, it is also possible to provide deeper security using functionality such as encryption, Database Vault, and Audit Vault.

Operational Isolation when Using Schema as a Service

From a backup and recovery perspective, schema as a service tablespaces can be both backed up and recovered individually, even recovered to different points in time. This capability increases the operational isolation significantly.

As more schemas are consolidated into a single database, operations that affect an ORACLE_HOME will affect more schemas. However, this drawback is offset to a certain extent by the ease with which transportable tablespaces can be used to move the schemas to a different database. Having said that, moving a schema in this way is not quite as straightforward as moving a pluggable database, so schema as a service doesn’t rank as highly as PDBaaS for that reason. In addition, schema-based consolidation lacks isolation from a database lifecycle management perspective, as schemas cannot be patched or upgraded independently of the database that contains them.

However, the area that makes schema as a service much more difficult from an operational perspective is the issue of namespace collision. Namespace collision occurs because a single database cannot contain multiple copies of the same database object in the same schema: there can be only one EMPLOYEES table owned by the HR user at any one point in time.
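You can see this directly in SQL*Plus. With the HR schema already in place, an attempt to create a second object with the same name in the same schema simply fails:

SQL> CREATE TABLE hr.employees (id NUMBER);
ORA-00955: name is already used by an existing object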

For schema as a service, this is not so much a concern at the database layer as it is at the application layer. The implementation of schema as a service in Enterprise Manager requires you to provide a schema prefix when you issue a Create Schema as a service instance request, and it creates the new schema with that prefix.

For example, if you were using the HR schema from the prod database to create a copy of the HR schema in the test database, and you provided the schema prefix MYHR, the test database would have a schema called MYHR_HR owning a copy of the prod HR database objects.

From the database perspective, then, we can indeed create multiple copies of a schema in a single database, and each schema will be named using the naming convention SCHEMA_PREFIX_ORIGINAL_SCHEMA_NAME, removing the issue of namespace collision.

However, from an application perspective, there is clearly still an issue, as any application based off the original HR schema will expect the objects to be owned by HR. There are at least three ways to address this issue:

Private synonyms: For each user in the database who will be accessing the HR application, create a private synonym for each object used in the application. This task must be repeated for every object and for every user who accesses the application. Obviously, it can be done in a scripted manner, but it still involves manual intervention by the database administrator.

Public synonyms: One way to avoid creating private synonyms for every user, as just shown, is to use public synonyms instead. However, only one public synonym with a given name can exist in the database, so this approach removes the ability to consolidate multiple copies of a schema into a single database and therefore is not really a resolution we can use.

Logon trigger: Create a logon trigger for each user who will use the application to include a statement of the form ALTER SESSION SET CURRENT_SCHEMA=MYHR_HR. Again, this requires manual intervention after the schema has been created. (A minimal sketch of the private synonym and logon trigger approaches follows this list.)
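As a minimal sketch of the private synonym and logon trigger approaches, assume an application user called APP_USER (a name invented purely for illustration) needs to see the MYHR_HR copy of the schema:

-- Private synonym approach: one synonym per object, per user
CREATE SYNONYM app_user.employees FOR myhr_hr.employees;

-- Logon trigger approach: repoint the whole session instead
CREATE OR REPLACE TRIGGER app_user.set_current_schema
  AFTER LOGON ON app_user.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA=MYHR_HR';
END;
/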

Of course, we could also modify the application code to change every HR schema reference to MYHR_HR, but that is rarely easily achieved. The end result is that, at the application layer, namespace collisions cause a lot of difficulty in the schema as a service paradigm.

However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed greatest consolidation to occur, and a number of customers have successfully used one of the preceding namespace collision resolutions in production environments.

Resource Isolation when Using Schema as a Service

Schemas created by schema as a service are just the same as any other database schema. As a result, it is quite simple to use Oracle Resource Manager to create resource consumer groups, map sessions to those groups, and then assign resources to those groups based on resource plan directives.
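A minimal sketch of that approach, using the MYHR_HR schema from earlier in the chapter (the group, plan, and percentage values are illustrative assumptions), looks like this:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'MYHR_GROUP',
    comment        => 'Sessions belonging to the MYHR_HR schema');
  -- Map any session logging in as MYHR_HR to the new group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'MYHR_HR',
    consumer_group => 'MYHR_GROUP');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'SCHEMA_CONSOL_PLAN',
    comment => 'CPU allocation for consolidated schemas');
  -- Allocate 30% of CPU at level 1 to MYHR sessions, 70% to everyone else
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'SCHEMA_CONSOL_PLAN',
    group_or_subplan => 'MYHR_GROUP',
    comment          => 'MYHR workload',
    mgmt_p1          => 30);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'SCHEMA_CONSOL_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'All other sessions',
    mgmt_p1          => 70);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
  -- The schema also needs the privilege to switch into its group
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
    grantee_name   => 'MYHR_HR',
    consumer_group => 'MYHR_GROUP',
    grant_option   => FALSE);
END;
/

The plan then takes effect when it is activated, for example with ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'SCHEMA_CONSOL_PLAN';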

However, because you do not know in advance which database a schema will be created in when using schema as a service, these methods all require intervention by the DBA after the schema has been created. Following are two methods that do not require this manual intervention after the service has been created:

1. Create and select workloads sensibly. If more than one workload is created, SSA users can specify the workload size that will best meet their requirements.

2. Properly define placement constraints at the database pool level. When a database pool is created, the self-service administrator can set maximum ceilings for resource utilization as placement constraints. These constraints can define

a. The maximum number of database services for each database.

b. The maximum CPU allocation for the service request.

c. The maximum memory allocation for the service request.

The service instance will then be provisioned on the member that best satisfies these placement constraints.

Fault Isolation when Using Schema as a Service

Fault isolation in a schema as a service request is normally provided at the schema level, so application faults in one schema will not cause other applications to fail.

However, it is still possible for issues such as login storms or an improperly configured midtier to impact other applications sharing the same database.

In addition, the more schemas that are consolidated into a single database, the more impact a fault at the database level will have. Of course, there are some faults (such as dropping a table incorrectly) that can be resolved at the schema level, thus isolating the fault from other schemas in the same database.

Once a fault has been isolated and resolved, there are two parts of the database architecture that allow fast recoverability and thus smaller mean time to repair (MTTR) in any database, including one used with schema as a service.

1. Flashback functionality, including both Flashback Drop and Flashback Table (a brief sketch of both follows this list):

a. Flashback Drop allows you to reverse the effects of dropping a table, including any dependent objects such as triggers and indexes.

b. Flashback Table allows you to undo the effects of accidentally removing (or indeed adding) some or all of the contents of a table, without affecting other database objects. This feature allows you to recover from logical data corruptions (such as rows incorrectly added to or deleted from the table) much more quickly than you otherwise could.

2. Point-in-time recoverability can be performed at the individual tablespace level, so if you have multiple schemas affected by an issue, you can issue parallel point-in-time recovery commands to improve MTTR.
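As a brief sketch of the flashback features, using the MYHR_HR schema from earlier in the chapter (note that Flashback Drop relies on the recycle bin being enabled, and Flashback Table requires row movement to be enabled on the table):

-- Reverse an accidental DROP TABLE, along with dependent indexes and triggers
FLASHBACK TABLE myhr_hr.employees TO BEFORE DROP;

-- Rewind the table contents by 15 minutes without touching other objects
ALTER TABLE myhr_hr.employees ENABLE ROW MOVEMENT;
FLASHBACK TABLE myhr_hr.employees
  TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;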

Scalability when Using Schema as a Service

Scalability is a fundamental characteristic of DBaaS architectures by virtue of their support for self-service, elasticity, and multitenancy. Oracle’s database technologies provide a number of different ways to support scalability when delivering database services, all of which are applicable in schema as a service. These include

• Resource management/quality of service.

• Addition of extra storage through such functionality as multiple Exadata Database Machine frames.

• Horizontal scaling via RAC when service demands increase beyond the capabilities of a single machine.

• Scalable management resources where Enterprise Manager can add management nodes as the number of targets under management grows.

High Availability when Using Schema as a Service

As we discussed in Chapter 1, not all consumers require the same level of availability in a cloud environment. Oracle provides different levels of availability to accommodate the unique needs of consumers in the cloud environment. Table 4.1 (reproduced from Chapter 1 for convenience) shows the availability levels offered through Oracle’s service plans.

Table 4.1. Availability Levels

Image

Summary

In this chapter, we looked at the architecture and components that make up a consolidated environment, as well as the deployment issues that need to be faced when undertaking a consolidation exercise. We also walked through the details of setting up and using schema as a service.

From the application layer, namespace collisions can cause substantial difficulty in the schema as a service paradigm. However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed the greatest consolidation to occur, and a number of customers have successfully used one of the namespace collision resolutions outlined in this chapter in production environments. In the next chapter, we cover using PDBaaS, which addresses the namespace collision issue very successfully.
