We started our Oracle Real Application Cluster (RAC) database cloud computing journey in Chapter 9 with Oracle VM (OVM) for x86 and reached the point where we had an Oracle grid infrastructure set up in a virtualized environment, based on Oracle VM for x86. The grid infrastructure serves as a consolidated Clusterware and storage (Automatic Storage Management, or ASM) platform for hosting and serving clustered RAC databases in the database cloud.
We continue our trek in this chapter by repeating some of the same steps with OVM VirtualBox instead of OVM for x86. This alternative approach presents you with the advantage of choice; you can pick a virtual solution based on your requirements. OVM enables you to virtualize RAC and set up database clouds in a corporate environment, whereas OVM VirtualBox enables you to do the same but on your own laptop, thereby eliminating the need for expensive hardware to set up, configure, and deploy Oracle RAC. The OVM VirtualBox approach is a great way of learning RAC in a quick and easy fashion, all implemented with the do-it-yourself methodology shown in this chapter.
Following this line of action, we resume our journey with OVM VirtualBox up to the point of setting up Oracle grid infrastructure. Once that stage is achieved, the subsequent steps in this chapter, detailing setting up RAC databases in the Oracle database cloud, apply equally to both approaches, OVM for x86 and OVM VirtualBox.
We also delve into cloud computing in detail from the perspective of Oracle. Some of this material is covered in the earlier chapters, but the information is worth reiterating.
As you will have observed by now if you worked through the last chapter, a step-by-step approach is followed in order to give you a 360-degree, A-to-Z roadmap for setting up virtualized Oracle RAC database clouds. You are free to choose from one of two virtualization solutions: OVM for x86 or OVM VirtualBox.
Following is a summary of topics presented in this chapter:
• OVM VirtualBox: A Brief Introduction
• What Is Cloud Computing? Synopsis and Overview
• Oracle’s Strategy for Cloud Computing
• EM12c and OVM—The Management and Virtualization Components for Oracle Database Clouds
• RAC Private Cloud on OVM VirtualBox—Software and Hardware Infrastructure Requirements
• Setting Up Virtualized Oracle RAC Clusters on OVM VirtualBox—Alternative Approaches
• Setting Up, Installing, and Configuring 12c Virtualized RAC Clusters on OVM VirtualBox—Step-by-Step Setup and Configuration
• OEM 12c—Implementing Database as a Service (DBaaS)
It is advisable to follow the steps outlined in this chapter on your own laptop, particularly if you are interested in setting up a brand-new 12c cluster on your own machine.
OVM VirtualBox is a free, open-source virtualization product offering from Oracle that enables guest VM operating system (OS) virtualization on your own laptop or desktop machine. It can be utilized to install, configure, test, and learn Oracle RAC, alleviating the need for dedicated physical hardware and expensive physical shared storage. OVM VirtualBox can also be used for installing, configuring, and testing various other Oracle products, as well as for many other virtualization applications. The latest version available at the time of writing is OVM VirtualBox 4.x.
OVM VirtualBox is a type 2 hypervisor—it installs on an existing preinstalled OS. It can be installed on the Linux, Macintosh, Solaris, and Windows OS families.
OVM VirtualBox can be downloaded from the Oracle Technology Network (OTN) website and can easily be installed by following the intuitive Installation Setup Wizard (see Figure 10.1). The entire process of downloading and installing OVM VirtualBox takes about 5 to 10 minutes.
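Once the wizard finishes, you can sanity-check the install from a shell. This sketch is an illustration, not part of the official procedure; it assumes VBoxManage is on your PATH and verifies that the reported version is 4.x or later:

```shell
#!/bin/sh
# major_ok checks that a VirtualBox version string (e.g. "4.3.12r93733")
# has a major version of 4 or higher. It is pure shell, so it can be
# exercised even on a machine without VirtualBox installed.
major_ok() {
  major=${1%%.*}          # strip everything after the first dot
  [ "$major" -ge 4 ]
}

# Only query VBoxManage if it is actually installed.
if command -v VBoxManage >/dev/null 2>&1; then
  ver=$(VBoxManage --version)
  major_ok "$ver" && echo "VirtualBox $ver looks recent enough"
fi
```

If the check fails, download a newer build from the OTN website before proceeding.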
Cloud computing means a lot of different things to a lot of different people. So what exactly is cloud computing?
In its most generally accepted form, cloud computing is web- or network-based computing wherein abstracted resources are located and shared on the network, whether on an intranet (private cloud) or the Internet (public cloud), presented in a service-based model. Cloud is sometimes used as an alternative term or metaphor for the Internet. By its generally agreed-upon definition, cloud computing is on-demand, metered, and self-serviceable.
Cloud computing is an evolution of existing IT paradigms, strategies, and models: in many respects, it is a rebranding, reorganization, and re-presentation of various components in the overall IT ecosystem. Cloud computing is in flux, not completely mature, and still evolving.
In other words, most cloud computing models do not introduce newer technologies but rather improve on the existing technologies by making them more efficient. Focusing on the subject at hand, RAC plays an integral role in setting up and configuring Oracle database clouds.
Is cloud computing a paradigm shift? The answer is yes and no, depending on your perception, understanding, and implementation of your flavor of cloud computing.
Is cloud computing seeing massive adoption? It is catching on, and the prospects are very promising. Cloud computing is also commonly understood as elastic computing, which is fundamentally attained by means of virtualization. Elastic computing is the capability to provide increased computing resources when needed.
Here are a few salient characteristics of an IT cloud:
• Dynamic, elastic, agile, and scalable
• Multitenant, secure, and reliable
• Metered, service based
Four deployment models are currently prevalent:
• Private cloud (or enterprise cloud): Characterized by clouds on private networks (may someday replace the traditional data-center term)
• Public cloud: Shared (typically virtualized) resources over the Internet
• Hybrid cloud: A combination of private and public cloud models
• Community cloud: Organizations forming a shared cloud for common needs, goals, and purposes
Cloud computing can be summarized with the phrase (and is widely understood as) “fill-in-the-blank as a service.” For example:
• Database as a service (DBaaS)
• Storage as a service
• Software as a service (SaaS)
• Middleware as a service (MWaaS)
• Platform as a service (PaaS)
• Infrastructure as a service (IaaS)
• IT as a service (the holy grail of cloud computing)
Oracle’s cloud computing strategy is comprehensive yet simple: Oracle provides infrastructure, products, and support for public and private clouds.
Services are based primarily on subscription-based application as a service, IaaS, and PaaS paradigms. Some of Oracle’s current cloud offerings include products such as Fusion CRM/HCM, RAC/Database Cloud Service, Oracle Social Network, and Oracle Java Cloud Service.
To briefly summarize the components involved, with OVM for x86 and Enterprise Manager Cloud Control 12c (EM12c), you can comprehensively formulate, implement, administer, maintain, meter, and support private clouds behind your corporate firewalls.
EM12c incorporates cloud functionality in cloud management packs, such as the Cloud Management Pack for Oracle Database. We talk more about this component at the end of the chapter, where we also provide an overview of DBaaS.
The virtualization component, OVM for x86 3.x, has been integrated into the framework of EM12c and works hand in hand with it to implement cloud IaaS. Because the OVM 3 agent works as a proxy agent, you do not need an additional EM12c agent deployed on your OVM for x86 machine.
With cloud application programming interfaces (APIs) and command-line interfaces (CLIs), self-service operations, out-of-the-box scaling capabilities, policy-based resource management, governance and chargeback/metering, cloud zones, and more, EM12c provides a wide variety of feature-rich functionality for setting up, managing, supporting, and administering Oracle database cloud infrastructures.
After the preceding overview of the various technologies, paradigms, and terms involved, it is now time to start setting up your own virtualized RAC 12c database cloud on OVM VirtualBox. The next section begins this journey by outlining the software and hardware requirements.
The software requirements are straightforward. The hardware requirements are a little bit more complicated but not overly so.
As outlined in the following steps, setting up a virtualized RAC 12c cluster requires the following prerequisite software components:
• OVM VirtualBox 4.x: Download and install from the OTN website.
• Oracle Enterprise Linux x86_64 Release 6.x: Oracle Enterprise Linux (OEL) is Oracle’s version of the popular Red Hat Enterprise Linux platform.
As outlined in the following steps, setting up a virtualized RAC 12c cluster requires a modern desktop or laptop, preinstalled with a Windows, Linux, Macintosh, or Solaris OS. This machine will serve as the host OVM VirtualBox machine. Table 10.1 presents the minimum requirements and the specs of the laptop that was used to follow and implement the steps detailed in this chapter.
In this section we run through two alternative approaches to setting up virtualized Oracle RAC on OVM VirtualBox. Then we walk through setting up, installing, and configuring 12c virtualized RAC on OVM VirtualBox.
The following sections, “Step 10.1, Approach 1” and “Step 10.1, Approach 2,” are alternatives to each other: you can choose either one.
Approach 1 is simple. It involves one main step composed of a couple of substeps. An OVM VirtualBox appliance is a golden image of software, ready to go. The concept is similar to OVM templates discussed in Chapter 9. Download the prebuilt, preconfigured appliance for OEL 6.x from the OTN website and import it.
1. Choose Oracle VM VirtualBox → File → Import Appliance.
2. Select the .ova file for OEL 6.x and press the Import button (see Figure 10.2).
As you can see, this option is simple and easy.
If you’ve already used Step 10.1, Approach 1, you can skip Step 10.1, Approach 2.
The second option is a bit more involved. You begin by creating an OEL 6.x virtual machine for Node 01 from an ISO image. The first step of Approach 2 is to enter information about your new VirtualBox VM:
1. Choose Oracle VM VirtualBox → New, and then enter the following information:
• Name and OS
• Name: bsfracvx1 (substitute the name of your RAC-Node-01)
• Type: Linux
• Version: Oracle (64 bit)
• Memory size:
• 4 GB per VM is the ideal size; if your host machine has only 8 GB of RAM, then 2.5 GB per VM will suffice.
• Hard drive:
• Create a virtual hard drive now
• Hard drive file type:
• VirtualBox Disk Image (VDI)
• Storage on physical hard drive:
• Dynamically allocated
• File location and size:
• Specify the folder that will house the virtual hard drive file
• Virtual hard drive size: 35 GB
The finished product will look something like what is shown in Figure 10.3.
2. Download and install OEL 6.x, as shown in Figure 10.4.
3. As mentioned in Chapter 9, at this point, download the OEL 6.x ISO from the Oracle eDelivery website, attach it to the RAC-Node-01 virtual machine as a virtual CD/DVD drive, boot from it, and then set up and install OEL 6.x.
Now that you have the new virtual machine created, it is time to configure it the way you want it:
1. Select VM for RAC-Node-01 → Settings.
2. Enter the following (see Figure 10.5):
• System:
• Uncheck the Floppy option in the Boot Order list.
• Processor:
• Change the number of processors to 2.
• Acceleration:
• Check the Enable VT-x/AMD-V and Enable Nested Paging options.
• Choose Network → Adapter 1 (see Figure 10.5):
• Check the Enable Network Adapter checkbox.
• For Attached to, select Bridged Adapter, and then select the network interface card (NIC) on the system; in this case, it is the WiFi card on the Windows 8 laptop.
• Select Intel PRO/1000 MT Desktop as the adapter type.
This virtual network interface card (VNIC) will serve as the public network interface for the RAC 12c cluster.
3. Choose Network → Adapter 2 (see Figure 10.6). Make the same selections as shown in the previous step with one exception: this VNIC will be Attached to the Internal Network.
This VNIC will serve as the first NIC for the HAIP-enabled private cluster interconnect.
4. Choose Network → Adapter 3. Make the same selections as shown in the previous step.
This VNIC will serve as the second NIC for the HAIP-enabled private cluster interconnect.
In this step, you create, configure, and attach the shared virtual disks for the cluster.
1. Create the shared disks for the RAC 12c cluster:
• GRID1 ASM disk group:
Qty: 5
ASM disk size: 15 GB
• DATA1 ASM disk group:
Qty: 3
ASM disk size: 20 GB
• RECO1 ASM disk group:
Qty: 1
ASM disk size: 20 GB
The commands to create the shared disks are shown here:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_grid01.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_grid02.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_grid03.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_grid04.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_grid05.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_data01.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_data02.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_data03.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480 --variant Fixed --format VDI --filename C:\TFM\Cloud12c\SharedStorage\asm_reco01.vdi
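The nine createhd invocations follow a regular pattern, so they can be generated with a short loop rather than typed out. The sketch below is a POSIX shell version (assuming something like Git Bash or Cygwin on the Windows host; the VBoxManage path and the C:\TFM\Cloud12c\SharedStorage folder are the ones used above). It only echoes the commands so you can review them before piping the output to sh:

```shell
#!/bin/sh
# Emit the VBoxManage createhd commands for the GRID1 (5 x 15 GB),
# DATA1 (3 x 20 GB), and RECO1 (1 x 20 GB) shared disks.
# VBM and DIR are assumptions matching the paths used in the text.
VBM='/c/Program Files/Oracle/VirtualBox/VBoxManage.exe'   # Git Bash style path
DIR='C:\TFM\Cloud12c\SharedStorage'

emit() {  # emit <prefix> <count> <size_mb>
  i=1
  while [ "$i" -le "$2" ]; do
    printf '"%s" createhd --size %s --variant Fixed --format VDI --filename "%s\\asm_%s0%s.vdi"\n' \
      "$VBM" "$3" "$DIR" "$1" "$i"
    i=$((i + 1))
  done
}

emit grid 5 15360
emit data 3 20480
emit reco 1 20480
# Review the nine echoed commands, then pipe them to sh to create the disks.
```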
2. Make the file-based virtual hard disks shareable for the ASM disk groups. In the following commands, substitute the filenames of the shared virtual disks created by the VBoxManage.exe createhd commands in the previous section.
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_grid01.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_grid02.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_grid03.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_grid04.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_grid05.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_data01.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_data02.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_data03.vdi --type shareable
VBoxManage modifyhd c:\TFM\Cloud12c\SharedStorage\asm_reco01.vdi --type shareable
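If you prefer to script the modifyhd pass, the following sketch (POSIX shell; the shared-storage folder is the same assumption as above) echoes one command per disk. Review the output, then pipe it to sh to execute:

```shell
#!/bin/sh
# Emit a VBoxManage modifyhd command for every shared ASM disk image.
# DIR is an assumption matching the folder used in the text.
DIR='C:\TFM\Cloud12c\SharedStorage'

shareable_cmds() {
  for disk in grid01 grid02 grid03 grid04 grid05 data01 data02 data03 reco01; do
    printf 'VBoxManage modifyhd "%s\\asm_%s.vdi" --type shareable\n' "$DIR" "$disk"
  done
}

shareable_cmds          # review, then pipe to sh to execute
```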
3. Attach the file-based virtual hard disks for the ASM disk groups to Node 1. In the following commands, substitute the filenames of the shared virtual disks created by the VBoxManage createhd commands in the previous sections.
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_grid01.vdi --type hdd --port 1 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_grid02.vdi --type hdd --port 2 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_grid03.vdi --type hdd --port 3 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_grid04.vdi --type hdd --port 4 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_grid05.vdi --type hdd --port 5 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_data01.vdi --type hdd --port 6 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_data02.vdi --type hdd --port 7 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_data03.vdi --type hdd --port 8 --device 0 --mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFM\Cloud12c\SharedStorage\asm_reco01.vdi --type hdd --port 9 --device 0 --mtype shareable --storagectl "SATA"
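The storageattach commands also follow a pattern: each disk goes on the next SATA port, starting at port 1 (port 0 holds the local OS disk). A sketch that echoes the commands for review, assuming the node name bsfracvx1 and the shared-storage folder used above:

```shell
#!/bin/sh
# Emit a storageattach command per shared disk, incrementing the SATA port.
# NODE and DIR are assumptions matching the names used in the text.
NODE=bsfracvx1
DIR='c:\TFM\Cloud12c\SharedStorage'

attach_cmds() {
  port=1
  for disk in grid01 grid02 grid03 grid04 grid05 data01 data02 data03 reco01; do
    printf 'VBoxManage storageattach %s --medium "%s\\asm_%s.vdi" --type hdd --port %s --device 0 --mtype shareable --storagectl "SATA"\n' \
      "$NODE" "$DIR" "$disk" "$port"
    port=$((port + 1))
  done
}

attach_cmds             # review, then pipe to sh to execute
```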
4. Verify the attachment and shareable status of the virtual shared ASM disks in OVM VirtualBox by choosing OVM VirtualBox → bsfracvx1 (Node 01) → Settings → Storage (see Figure 10.7).
Power up the guest VM and configure it by implementing the following steps, which are outlined in Chapter 9:
1. Set the network configuration of the private cluster interconnect VNIC.
2. Modify the /etc/hosts file to include the relevant entries for RAC 12c.
3. Check for space requirements.
4. Disable the Linux software firewall.
5. Configure and restart the Network Time Protocol Daemon (NTPD) client.
6. Partition, format, and mount /u01 on the 25-GB local virtual hard disk.
7. Disable the SELINUX option.
8. Install Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP server).
9. Install X Window System desktop.
10. Reboot RAC-Node-01 for all of the preceding setups and configurations to take effect.
11. Perform Oracle software preinstallation steps on the RAC-Node-01 VM.
12. Create the required and relevant OS groups.
13. Create the oracle and grid OS users as the Oracle DB HOME software owners and grid infrastructure HOME software owners, respectively, and set their initial passwords.
14. Create the Optimal Flexible Architecture (OFA) directory structure for RAC 12c.
15. Observe and verify the required and relevant permissions of the created OFA directory structure.
16. Set up and configure the NTPD daemon.
17. Turn off and unconfigure the Avahi daemon.
18. Install packages and options for Linux kernel.
19. Create primary partitions for all the GRID1, DATA1, and RECO1 ASM disk groups.
20. Verify the partition structures for the underlying disks within the GRID1, DATA1, and RECO1 ASM disk groups.
21. Configure ASMLIB on RAC-Node-01.
22. Download and stage the Oracle software binaries.
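As a concrete illustration, steps 4 and 7 above (disabling the Linux software firewall and SELinux) might look like the following on OEL 6.x. The exact commands are assumptions based on standard OEL 6.x tooling, so verify them against Chapter 9:

```shell
#!/bin/sh
# disable_selinux flips the SELINUX= line to disabled in the given config
# file; it is parameterized so the edit can be rehearsed on a scratch copy
# before touching /etc/selinux/config.
disable_selinux() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# Run the real commands only as root on a box that has them.
if [ "$(id -u)" -eq 0 ] && command -v chkconfig >/dev/null 2>&1 && [ -f /etc/selinux/config ]; then
  service iptables stop                 # step 4: stop the software firewall now
  chkconfig iptables off                # ...and keep it off across reboots
  disable_selinux /etc/selinux/config   # step 7: takes effect after the reboot in step 10
fi
```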
At this point, the VM for RAC-Node-01 is ready to be cloned (illustrated in the next section).
Before cloning the virtual hard drive, make a backup copy of the source virtual hard drive (for example, bsfracvx1_localvhd.vdi on bsfracvx1) to establish a save point. This enables you to revert to a point-in-time copy of Node 01 so that if there are issues further down the road, you will not have to start over from scratch. However, this approach does translate into a larger space requirement on your desktop or laptop host machine.
Then follow these steps:
1. Shut down the VM for RAC-Node-01 and create another directory for RAC-Node-02, such as C:\Users\bfm1\VirtualBox VMs\bsfracvx2.
2. Run the VBoxManage.exe clonehd command to clone the hard drive of Node 01:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd "C:\Users\bfm1\VirtualBox VMs\bsfracvx1\bsfracvx1_localvhd.vdi" "C:\Users\bfm1\VirtualBox VMs\bsfracvx2\bsfracvx2_localvhd.vdi"
Temporarily relocate the cloned virtual hard drive for Node 02 to another folder to avoid any errors during creation of the VM for RAC-Node-02, as outlined in the following section.
Follow these steps to create and configure the VM for RAC-Node-02.
1. Go to Oracle VM VirtualBox → Machine → New.
2. Follow the same instructions as outlined earlier (Step 10.1, Approach 2) to create and configure the VM for RAC-Node-02, with one exception: instead of creating a new HD, select and attach an existing virtual HD and specify the name of the cloned virtual HD file created in the previous section (see Figure 10.8).
3. Configure and customize RAC-Node-02 by following the same steps as outlined in Step 10.1, Approach 2:
• System
• Processor
• Acceleration
• Network settings (adapters 1, 2, and 3)
• Attach the shareable ASM virtual disks to RAC-Node-02 using the VBoxManage storageattach
command.
• Verify the attachment and shareable status of the virtual shared ASM disks within OVM VirtualBox.
4. Configure the network settings within the OS for RAC-Node-02.
5. Power up the VM for RAC-Node-02, and in the OS, edit the network settings as shown in the following:
• Change the hostname in /etc/sysconfig/network.
• Choose Linux → Top Menu → System → Administration → Network.
• Remove the System eth* VNICs: select Device → Deactivate → Delete.
• Change the connection names to match Node 01, from Auto eth* to System eth*.
• Modify the IP address information for the public and private network interfaces for RAC-Node-02.
• Modify the /etc/udev/rules.d/70-persistent-net.rules file to reflect the correct eth* entries:
$ vi /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:c3:58:84", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:06:ad:43", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:86:01:91", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
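Because each udev rule ties a NAME= value to a MAC address, a quick scripted check can confirm the file matches the MACs VirtualBox assigned (visible under each adapter's Advanced settings). The nic_map helper below is illustrative, not part of the original procedure:

```shell
#!/bin/sh
# nic_map prints "name mac" pairs extracted from a persistent-net rules file,
# one line per rule, so they can be eyeballed against the VM's NIC settings.
nic_map() {
  sed -n 's/.*ATTR{address}=="\([^"]*\)".*NAME="\([^"]*\)".*/\2 \1/p' "$1"
}

f=/etc/udev/rules.d/70-persistent-net.rules
if [ -f "$f" ]; then
  nic_map "$f"
fi
```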
• Modify the .bash_profile file to reflect the hostname and second instance on Node 02 (the following also serves as an example of a .bash_profile file for a RAC 12c cluster):
[oracle@bsfracvx2 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/usr/kerberos/sbin:/usr/local/sbin:/sbin:/root/bin
export PATH
ORACLE_TERM=xterm
export ORACLE_TERM
ORACLE_SID=racvxdb2
export ORACLE_SID
ORACLE_HOSTNAME=bsfracvx2.bsflocal.com
export ORACLE_HOSTNAME
ORACLE_UNQNAME=RACVXDB
export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export ORACLE_HOME
TMP=/tmp
export TMP
TMPDIR=$TMP
export TMPDIR
PATH=/usr/sbin:$PATH
export PATH
PATH=$ORACLE_HOME/bin:$PATH
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
if [ $USER = "grid" -o $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
Reboot Node 02. Power up Node 01 and verify network connectivity between the two nodes by pinging both the public and private network interfaces on each node.
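The connectivity check can be scripted. In this sketch, the hostnames and the -priv suffix for the private interconnect entries are assumptions about your /etc/hosts naming; adjust them to match your configuration:

```shell
#!/bin/sh
# ping_targets builds the list of interfaces to test: for each node name,
# both the public hostname and the assumed <node>-priv interconnect entry.
ping_targets() {
  for node in "$@"; do
    echo "$node"
    echo "${node}-priv"
  done
}

for host in $(ping_targets bsfracvx1 bsfracvx2); do
  ping -c 2 "$host" >/dev/null 2>&1 && echo "$host: OK" || echo "$host: UNREACHABLE"
done
```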
Next, you need to enable X11 forwarding.
1. Install an X Window System display server, such as Xming, on the host OS (in this case, Windows 8).
2. Enable X11 forwarding in your terminal emulator program, such as PuTTY (see Figure 10.9).
Start the guest VMs constituting the RAC and install and set up Oracle 12c grid infrastructure by following the steps outlined in Chapter 9:
1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches, or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option.
3. Select the Configure a Flex Cluster option.
4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS).
6. Enter the relevant information for the RAC 12c nodes including HUB and LEAF nodes.
7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes.
8. Specify the network interfaces for the public network, the private cluster interconnect, and ASM.
9. Select the Configure Grid Infrastructure Management Repository option.
10. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level.
11. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
12. Select the Do not use Intelligent Platform Management Interface (IPMI) option.
13. Specify the OS groups for ASM.
14. Enter the Oracle BASE and HOME locations.
15. Enter the Oracle inventory location.
16. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts.
17. Generate and run any runfixup.sh scripts to fix any prerequisite issues.
18. Press Install to initiate the installation process for grid infrastructure.
At this point, Oracle 12c grid infrastructure has been set up: we are at the same level of setup in the RAC 12c installation process as we were at the end of Chapter 9. From this point onward, the steps outlined in the following sections apply equally to both virtualization (or physical hardware) approaches: OVM VirtualBox and OVM for x86.
The next two steps outline what is involved in installing and setting up EM12c agent software on the RAC 12c node virtual machines, which are used for monitoring the host machines and all the targets within them by EM12c. This section also assumes that you have an EM12c setup in place. If you are doing it all on your home laptop, then it is advisable to have EM12c installed on the host laptop itself. Step 10.10 details how to configure the Windows 8 firewall (if it is the underlying OS) for EM12c to communicate with the virtual machines for the RAC nodes.
Follow these steps to deploy/install the EM12c agents.
1. Choose OEM 12c → Top-Right Menu → Setup → Add Target → Add Targets Manually.
2. Add targets manually by selecting the Add Host Targets option. Press the Add Hosts button.
3. Set the host and platform by entering the hostnames and platform (Linux x86_x64) for the RAC nodes. Enter a session name to identify the job associated with the addition of the targets in EM12c.
4. Then tend to the additional installation details by entering the following information:
• The installation base directory, for example, /u01/app/oracle/agent12c
• The instance directory, for example, /u01/app/oracle/agent12c/agent_inst
• Named credentials for the oracle OS user
• The privileged delegation setting
• The port number
5. Now press the Deploy Agent button. Monitor the progress, as shown in Figure 10.10.
6. After the deployment process completes, run the root.sh script as the privileged root OS user.
7. Follow the process to promote all the non-host targets.
The default setup and functionality in Windows 8 does not allow a pingback; therefore, a custom inbound rule has to be configured to enable communication between the Windows host machine and the virtual machines for the RAC nodes. The following steps detail this process. For a non-Windows OS on the host machine, implement a similar process (if applicable) to enable communication between the EM12c host and the RAC node VMs.
1. Choose Start → Control Panel → Windows Firewall → Advanced Settings → Inbound Rules.
2. Enable the File and Printer Sharing (Echo Request—ICMPv4-In) rule (see Figure 10.11).
This section details the steps for creating the ASM disk groups needed for the RAC 12c database(s).
1. Create the DATA1 ASM disk group using the ASM Configuration Assistant (ASMCA):
[grid@bsfracvx1 bin]$ pwd
/u01/app/12.1.0/grid_1/bin
[grid@bsfracvx1 bin]$ export ORACLE_HOME=/u01/app/12.1.0/grid_1
[grid@bsfracvx1 bin]$ ./asmca
2. Press the Create button.
3. Choose the appropriate level of ASM disk group redundancy—External in this case (see Figure 10.12).
4. Select the ASM disk group member disks.
5. Enter the values for the ASM disk group compatibility parameters.
6. Set the allocation unit size to 4 MB.
7. Repeat the preceding steps for the RECO1 ASM disk group.
The finished product is shown in Figure 10.13.
This section contains the steps to install the RAC database software into non-shared database homes. Implementing non-shared Oracle homes is a best practice because it allows patches to be applied to the RAC in rolling fashion, without the need to bring the entire cluster down for maintenance.
1. Run the Oracle Universal Installer (OUI) from the RAC 12c Database staging area directory:
[oracle@bsfracvx1 database]$ pwd
/home/oracle/software/Ora12c/Database/database
[oracle@bsfracvx1 database]$ ./runInstaller
2. Enter the following information and make the following selections in the Wizard Entry screens of the OUI, as shown in the following screenshots. In certain cases, you’ll need to modify the entries according to the specific needs of your organization:
a. Enter the MOS credentials for support on software updates and patches, or choose to skip them (see Figure 10.14).
b. Select the Install database software only option, as shown in Figure 10.15.
c. Select Oracle Real Application Clusters database installation (see Figure 10.16).
d. Select the RAC nodes on which the installation is to be performed (see Figure 10.17).
e. Click SSH Connectivity to ensure that it is established.
f. Select the appropriate product language(s).
g. Select the Database edition, in this case, Enterprise Edition (see Figure 10.18).
h. Specify the Oracle base and home locations (see Figure 10.19).
i. Specify the Oracle OS groups for the various job roles: OSDBA, OSOPER, OSBACKUPDBA, OSDGDBA, OSKMDBA (see Figure 10.20).
j. The warnings and errors are shown as part of the Verification Results in the next screen (see Figure 10.21).
k. Click Install to initiate the installation process for DB HOME software on the RAC nodes, as shown in Figure 10.22.
3. As shown in Figure 10.23, after the installation completes, run the root.sh script on all the RAC nodes, as the privileged OS root user.
[root@bsfracvx1 dbhome_1]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as the generic part of the root script runs.
Next, product-specific root actions will be performed.
At this point, we are ready to create and install a clustered RAC 12c database using the Database Configuration Assistant (DBCA) utility. DBCA is considered the best-practice database creation and setup tool because it enables the user to create a RAC (or non-RAC) database with a whole host of industry-standard best practices built in, along with other database management options. This section contains the steps to do so.
1. As the oracle OS user, run the DBCA utility from the RAC 12c DB HOME:
[oracle@bsfrac01 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/bin
[oracle@bsfrac01 bin]$ ./dbca
2. Enter the information in the following steps, and make the selections in the Wizard Entry screens of DBCA, as shown in the figures. In certain cases, you’ll need to modify the entries according to the specific needs of your organization.
3. Select the Create Database option (see Figure 10.24).
4. Select the Advanced Mode option (see Figure 10.25).
5. Select the Oracle Real Application Clusters (RAC) database and Policy-Managed options (see Figure 10.26).
6. Select the General Purpose or Transaction Processing option (see Figure 10.26). Press the Show Details button on this screen.
7. Enter the global database name (see Figure 10.27).
8. Enable the Create As Container Database checkbox (see Figure 10.27).
9. Specify the appropriate options for the container database(s): number of pluggable databases, PDB name prefix (see Figure 10.27).
10. Enter the server pool information for the policy-managed RAC DB: server pool name, cardinality, and existing or new server pool (see Figure 10.28).
11. Specify the management options by registering the database with EM12c.
12. Enter and verify the passwords for the database users: SYS, SYSTEM, PDBADMIN, DBSNMP (see Figure 10.29).
13. Enter the ASM disk groups for the data file locations (see Figure 10.30).
14. Enable the Archiving option for the online Redo log files and enter the parameters for archiving (see Figure 10.30).
15. Press the Multiplex Redo Logs and Control Files button (see Figure 10.30), and enter the locations for the multiplexed files (see Figure 10.31).
16. Enter the parameters for the Fast Recovery Area (FRA) (refer to Figure 10.30).
17. Enter the parameters for sample schemas, custom scripts, database vault, and label security.
18. Enter the parameters for database memory management (see Figure 10.32), sizing (see Figure 10.33), connection mode (see Figure 10.34), and character sets (see Figure 10.35).
19. Press the All Initialization Parameters button, and then the Show Advanced Parameter button. Modify the RAC DB initialization parameters as needed (see Figure 10.36).
20. Press the Customize Storage Locations button. Modify the parameters for control files (see Figure 10.37), data files, and redo log groups and files as needed.
21. Enable the Create Database and Generate Database Creation Scripts checkbox options, and enter the location of the generated scripts’ destination directory (see Figure 10.38).
22. You may see a warning message about memory/swap sizes: check the Ignore All checkbox, and then press the Next button (see Figure 10.39).
23. Press the Finish button to initiate the installation process for the new RAC 12c database on the RAC nodes. When the process completes, you will see a dialog box like the one shown in Figure 10.40.
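The same database can also be created non-interactively using DBCA's silent mode, which is useful for scripting repeatable builds. The following is a minimal sketch using only generic silent-mode flags; the passwords are placeholders, and the full set of RAC- and policy-management-related options for your release is listed by dbca -help:

```
[oracle@bsfrac01 bin]$ ./dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName bsfrvxdb \
  -sysPassword <sys_password> \
  -systemPassword <system_password>
```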
Run the following commands in SQL*Plus to perform basic sanity checks on the new RAC 12c database.
SQL> select instance_name,status from gv$instance;
INSTANCE_NAME STATUS
---------------- ------------
bsfrvxdb_1 OPEN
bsfrvxdb_2 OPEN
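Because the database was created as a container database (CDB), two further queries, shown here as a sketch using standard 12c dynamic performance views, confirm the CDB property and the state of the pluggable databases:

```
SQL> select name, open_mode, cdb from v$database;
SQL> select con_id, name, open_mode from v$pdbs;
```

The CDB column of v$database shows YES for a container database, and v$pdbs lists each PDB (including PDB$SEED) along with its open mode.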
[grid@bsfracvx1 bin]$ ./lsnrctl status
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 20-MAR-2013 03:55:50
Copyright © 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start date 19-APR-2013 18:23:25
Uptime 0 days 9 hr. 32 min. 27 sec
Trace level OFF
Security ON: Local OS Authentication
SNMP OFF
Listener parameter file /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener log file /u01/app/grid/diag/tnslsnr/bsfracvx1/listener/alert/log.xml
Listening endpoints summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.116)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.160)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=bsfracvx1)(PORT=5500))(Security
=(my_wallet_directory=/u01/app/oracle/product/12.1.0/dbhome_1/admin/
bsfrvxdb/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+APX" has 1 instance(s).
Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
Service "bsfrvxdb" has 1 instance(s).
Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
Service "bsfrvxdbXDB" has 1 instance(s).
Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
Service "bsfrvxpdb" has 1 instance(s).
Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
The command completed successfully
[root@bsfracvx1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora....SM.lsnr ora....er.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora.DATA1.dg ora....up.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora.GRID1.dg ora....up.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfracvx2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfracvx1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfracvx1
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE bsfracvx1
ora.RECO1.dg ora....up.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora.asm ora.asm.type 0/5 0/0 ONLINE ONLINE bsfracvx1
ora....X1.lsnr application 0/5 0/0 ONLINE ONLINE bsfracvx1
ora....vx1.ons application 0/3 0/0 ONLINE ONLINE bsfracvx1
ora....vx1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfracvx1
ora....X2.lsnr application 0/5 0/0 ONLINE ONLINE bsfracvx2
ora....vx2.ons application 0/3 0/0 ONLINE ONLINE bsfracvx2
ora....vx2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfracvx2
ora.bsfrvdb.db ora....se.type 0/2 0/1 ONLINE ONLINE bsfracvx2
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE bsfracvx1
ora.gns ora.gns.type 0/5 0/0 ONLINE ONLINE bsfracvx1
ora.gns.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfracvx1
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE bsfracvx1
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE bsfracvx1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE bsfracvx1
ora.proxy_advm ora....vm.type 0/5 0/ ONLINE ONLINE bsfracvx1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfracvx2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfracvx1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfracvx1
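As an aside, the crs_stat utility is deprecated in recent Clusterware releases; the same cluster resource status can be obtained with crsctl, and the database itself can be checked with srvctl, as sketched below (bsfrvxdb is the database name used in this chapter):

```
[root@bsfracvx1 bin]# ./crsctl stat res -t
[oracle@bsfracvx1 bin]$ ./srvctl status database -d bsfrvxdb
```

crsctl stat res -t prints the same resources in a tabular layout, grouped by resource type.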
[root@bsfracvx1 bin]# ps -ef | grep pmon
grid 4398 1 0 21:11 ? 00:00:02 asm_pmon_+ASM1
grid 4886 1 0 21:13 ? 00:00:01 mdb_pmon_-MGMTDB
oracle 5732 1 0 21:16 ? 00:00:02 ora_pmon_bsfrvdb_2
grid 6138 1 0 21:18 ? 00:00:01 apx_pmon_+APX1
root 11718 11035 0 21:39 pts/0 00:00:00 grep pmon
Congratulations! The setup of your own virtualized RAC 12c cluster is now complete, fully functional, and running. The steps for creating clusters on physical hardware are very similar, if not identical, to the steps outlined in this and the previous chapter.
Now that we have learned how to create RAC 12c clusters from scratch, let us focus on the cloud management piece of the picture.
As discussed in Chapter 9, the following components are needed for setting up virtualized Oracle database clouds:
• OVM for x86
• EM12c
• Oracle Cloud Management Pack for Oracle Database
EM12c is the nerve center of cloud computing for Oracle products in general, including the Oracle database server family. The Cloud Management Pack for Oracle Database provides the features, options, and framework to set up, configure, monitor, meter, account for, and charge back Oracle database clouds, including self-service capability for Oracle databases (see Figure 10.41).
Following are some salient features and benefits of setting up a database cloud using EM12c:
• Elasticity on demand, enabled by rapid and agile provisioning of database resources
• End-to-end management of the database cloud life-cycle process
• Self-service access for cloud consumers
• Definition of the service catalog and publishing of templates
• Pooling of cloud resources
• Performance monitoring of cloud databases
• Easy power-up, power-down, and retirement of cloud databases
• Role-based security implementation
• Accounting through chargeback and metering of cloud resources
The process of setting up and configuring DBaaS in EM12c is beyond the scope of this book; you are encouraged to read the Oracle documentation on how to set up DBaaS in EM12c in order to complete the Oracle database cloud computing picture.
Following are some thought-provoking points and questions (as well as some answers) that we (the DBAs) should ask ourselves:
• Who are the end consumers and owners of the IT hardware?
Yes, the business implicitly owns everything, but from the administration, maintenance, support, and ownership standpoints, the answer is that we, the Oracle DBAs, are the end consumers of the machines.
• What if we put OVM directly into the control of Oracle DBAs?
No more waiting for system administrators, but of course, that would mean adding to your skill set.
• What if we could have the power of agile elasticity to set up and remove machines in our own hands?
We could rapidly prototype new environments without having to wait for and depend on the OS and sysadmin folks.
• Is super-rapid provisioning of new infrastructures really possible with virtualization? That sounds like someone is blowing a lot of hot air—right?
WRONG.
• Virtualization in production? No way is that going to happen on my watch: it would be an overhead and a performance nightmare—right?
WRONG.
• Multi-tenant virtualization: That would present security risks; guest VMs would not be isolated and secure enough—right?
WRONG.
• In order to implement my own private cloud using OVM, I would have to learn so much—right?
WRONG. (OVM, along with EM12c, is ultra-easy to learn and implement; you can set up your entire virtualized infrastructure within a few hours; the real fun and productivity start after that.)
• Carrying the burden of legacy infrastructures, my professional back hurts on a daily basis. When will I get my hands on new machines that I have been promised by the OS and sysadmin folks for a while now?
Virtualization and cloud computing together are the consolidated answer to all of the above—unprecedented productivity, throughput, and resource efficiency, which quite simply are just not possible in the physical, non-cloud universe. Oracle RAC 12c is the database cloud and enables you to complete the overall corporate cloud picture.
As emphasized in this chapter and in Chapter 9, the significance of cloud computing can no longer be overlooked or ignored in the modern-day IT workplace. Whether it is your own private cloud behind the corporate firewall, a subscription-based model in a public cloud, or a hybrid of both paradigms, cloud computing is an inevitability that is happening now in the IT universe. This and the previous chapter presented a detailed, step-by-step way to build your own virtualized RAC 12c clouds, managed and integrated under the umbrella of EM12c. An overview of the various paradigms, technologies, and products involved in setting up virtualized RAC database clouds was also presented. You can choose one of two virtualization options: OVM for x86 or OVM VirtualBox. The latter option allows you to set up, configure, and learn RAC 12c comfortably and conveniently on your own home laptop or desktop machine.