Cloud computing promises to usher in a new era for the corporate IT universe. Every day, we hear that the cloud is inevitable within the typical IT organization, if not already present in some form; it is reasonable to conclude that cloud computing is a matter of when, not if.
The cloud computing winds of change have been blowing for quite a few years now, recently picking up momentum at Oracle Corporation—almost every key Oracle product focuses on cloud computing as a paradigm. This focus is evident in the “c” (for “cloud”) that is appended to the current release number of Oracle products. Real Application Clusters (RAC) is no exception to Oracle’s turn toward cloud computing.
Virtualization, already widely implemented today, is the foundation of cloud computing. What is virtualization, and what part does it play in the cloud universe? This chapter and the next provide insights, recommendations, and a step-by-step guide to setting up virtualized RACs, with an emphasis on virtualization, cloud computing, Oracle Virtual Machine (OVM) for x86, and Oracle Enterprise Manager Cloud Control 12c (EM12c). These virtualized RACs can then be utilized within the framework of database as a service (DBaaS) for rapid and easy deployment as database cloud services.
The overlap between the material in this and other chapters is intended to reiterate important concepts as well as to present the topics in the proper context.
Following is a summary of topics presented in this chapter:
• Building Oracle database clouds: The necessary ingredients
• What is virtualization?
• What are virtual machine (VM) monitors (hypervisors)?
• Types of hypervisors
• Types of virtualization
• Oracle VM for x86—360 degrees
• Xen—Synopsis and overview
• Oracle VM—Overview and architecture
• Oracle VM templates—Synopsis and overview
• Oracle VM 3.x—A brief introduction
• Setting up virtualized Oracle RAC clusters using Oracle VM: Alternative approaches
• Set up, install, and configure 12c virtualized RAC clusters: Step-by-step setup and configuration
This chapter guides you, step by step, through installing, setting up, and configuring a virtualized RAC 12c using OVM for x86. The next chapter takes a similar approach, with one major difference—the underlying virtualization technology (hypervisor) is Oracle VirtualBox instead of OVM for x86. This information gives you the choice of using either virtualization technology or both technologies to set up virtualized Oracle RAC 12c database clouds. An overview of cloud computing and the role and relevance of virtualization from the perspective of cloud computing are also covered in both chapters. All respective versions of the hypervisors used are the latest and greatest at the time of the publication of this book.
Cloud computing can be described as “fill-in-the-blank as a service”: for example, infrastructure as a service (IaaS), platform as a service (PaaS), and database as a service. A more detailed overview of cloud computing, covering its various flavors, paradigms, prevalent trends, and a whole lot more, is presented in the next chapter.
How do we plan for, set up, build, and configure Oracle database clouds? The short answer is OVM for x86, EM12c, and RAC. Together they make up the true database cloud solution from Oracle, especially if you are planning your own private database clouds behind your corporate firewalls. OVM for x86 is used interchangeably with OVM in this chapter and the next.
An overview of virtualization is presented in this chapter with a follow-up section on cloud computing in the next chapter.
Virtualization replaces physical entities in the IT universe with virtual ones. Here are some salient features and key points about virtualization:
• Virtualization is the foundation stone in the cloud computing era.
• Virtualization is an inevitability in the IT universe, one that you just can’t avoid: the sooner you embrace it, the better off you are.
• Virtualization can be summarized as an abstraction layer.
• Virtualization has proved to be a game-changer, resulting in unprecedented server utilization.
• Virtualization enables agile availability of resources to the end user, thereby shaving considerable time from the IT provisioning life cycle.
• Virtualization in the modern day can be characterized as the gateway and roadmap to secure and elastic corporate IT scalability.
• Virtualization implies a fantastic alternative to physical reality—the possibilities are endless.
• The alternative to virtualization is a fleet of physical hosts with idle spare capacity and many underutilized resources.
• Although Oracle database administrators (DBAs) were slow to adopt virtualization for their databases, the trend has finally gained momentum and reached critical mass.
A VM monitor, also known as a hypervisor, enables OS kernels to run and coexist as guests, thereby enabling virtualization at the OS level. Hypervisors are responsible for allocation and coordination of CPU, memory, I/O, peripheral resources, and so on, to the guest VMs.
There are two types of hypervisor:
• Type 1: This type is known as a native or, more commonly, bare-metal hypervisor. It installs on bare-metal hardware and does not require a host OS. Examples are VMware ESX/vSphere, Microsoft Hyper-V, Xen, and OVM. Bare-metal hypervisors are the enterprise-grade hypervisors that enable cloud computing as it is widely known and understood today.
• Type 2: This type is known as a hosted hypervisor and is installed on an existing OS on the system: examples are Oracle VM VirtualBox, VMware Server, and VMware Workstation. Hosted hypervisors are mostly utilized for personal use, for example, learning new technologies and colocating various OS families on your laptop.
Here are some key points and salient features about hypervisors:
• A hypervisor is at the lowest level of the stack from a technology standpoint.
• A hypervisor enables agility and rapid deployment of resources within the IT space.
• Hypervisors result in increased efficiency by merit of elastic resource consolidation.
Following are some benefits and advantages of implementing hypervisors:
• Increased resource utilization
• Fault tolerance and high availability
• Isolation and multitenant support
• Support for a wide range of popular OS families
There are three types of virtualization prevalent in the industry today (the first two categories are explained in the following sections, as they are relevant to this chapter):
• Paravirtualization
• Hardware-assisted/full virtualization
• Partial virtualization
In paravirtualization, guest VMs use a special hypercall application binary interface (ABI) in a modified OS for performance and simplicity. The modified OS communicates with the hypervisor, and tasks are relocated from the virtual domain to the host domain.
OVM implements this type of virtualization; the Oracle Linux and Red Hat Enterprise Linux families are supported as paravirtualized guests.
Paravirtualization is generally faster than hardware virtualization, though this is not to imply that either type is slow or not fast enough.
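To see which mode a given Linux guest is actually running in, you can check the hypervisor interface the kernel exposes. The following is a minimal sketch, assuming a Linux guest: on a paravirtualized Xen guest, /sys/hypervisor/type reports xen, and the boot log mentions the paravirtualized kernel.

```shell
# Report the virtualization mode of the current Linux guest.
# On a Xen PV guest, /sys/hypervisor/type contains "xen"; elsewhere
# the file is absent and we fall back to a plain message.
if [ -r /sys/hypervisor/type ]; then
  echo "Hypervisor: $(cat /sys/hypervisor/type)"
  dmesg | grep -i 'paravirtualized' | head -1   # PV kernels log this at boot
else
  echo "No Xen hypervisor interface detected (bare metal or non-Xen guest)"
fi
```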
Hardware-assisted virtualization is also known as full or native virtualization and requires CPU support.
This type of virtualization enables unmodified guest OS kernels to run within a simulated hardware infrastructure, but it is generally slower than paravirtualization.
The Microsoft Windows and Oracle Solaris families are supported on OVM as hardware/full virtualized guests, with paravirtualized drivers available for improved I/O performance.
OVM for x86 is a Type 1 hypervisor based on Xen, the de facto standard open-source hypervisor. Xen is a mainstream technology, widely used by dominant cloud computing providers such as Amazon and Rackspace, as well as by Oracle’s own public cloud. OVM provides both server virtualization and management components. OVM 3.x is based on Xen 4.x and has been significantly enhanced into an industrial-grade product capable of configuring, administering, managing, and supporting thousands of servers hosting both Oracle and non-Oracle applications. Advances in this relatively new version include dynamic resource scheduling (DRS), high availability–enabled server pools (clusters), and dynamic power management. OVM is augmented with the Virtual Assembly Builder and Template Builder components, which combine to form a complete virtualization picture within the OVM family.
Following are some of the key points about OVM’s capabilities and some of its advantages. However, as with any technology, OVM has its fair share of nuances, most of which can be taken care of by proper configuration and by following implementation best practices.
• Server load-balancing
• Centralized network and storage management
• Physical to virtual (P2V) and virtual to virtual (V2V) conversion
• Web services API
• Support for Windows, Linux, and Solaris as guest OS families
• Agility and fast deployment with OVM templates and Oracle Virtual Assembly Builder
• Web-based GUI management
• OVM zones—multiple server and storage pools
• High availability and live migration with OVM server pools
• Running mixed heterogeneous workloads within a single consolidated machine
• Very fast—delivers near-native performance
• Simple and easy installation—low learning curve
Another nice point is that OVM is free; you pay only for low-cost support.
OVM is the only virtualization offering for the x86 architecture that is certified with all major Oracle products.
Xen originated at Cambridge University and is the leading open-source, industry-standard hypervisor. Ian Pratt founded XenSource, the company behind Xen, which was later acquired by Citrix in 2007. Xen 4.x is the latest version as well as the underlying version for OVM 3.x.
The Xen hypervisor is the virtualization base of Amazon EC2, the market leader in the cloud computing IaaS service model. Oracle is part of the Xen Advisory Board and contributes to its development. Other members of the Xen Advisory Board include Citrix, Hewlett Packard, IBM, Intel, Novell, Oracle, and Red Hat.
OVM is made up of two components:
• OVM Server, the Xen-based open source hypervisor component
• OVM Manager, the Java-based thin-client GUI management component
OVM server is the actual hypervisor component based on Xen. It installs on bare-metal x86 hardware and does not require a preinstalled OS.
OVM boots a small 64-bit domain called DOM0, which is used for assigning, distributing, and coordinating CPU, I/O, and other resources. Guest VMs are created and configured as DOMUs (guest domains).
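From DOM0, the Xen toolstack can list the control domain and all running guest domains along with their memory and virtual CPU allocations. A hedged sketch follows: OVM 3.x-era Xen ships the xm toolstack, while later Xen releases use xl with the same subcommand.

```shell
# List Xen domains: Domain-0 (DOM0) plus any running guests (DOMUs),
# with their memory and virtual CPU allocations. Prefer the newer xl
# toolstack and fall back to xm; print a hint when neither exists.
if command -v xl >/dev/null 2>&1; then
  xl list
elif command -v xm >/dev/null 2>&1; then
  xm list
else
  echo "Xen toolstack not found - run this in DOM0 on an OVM server"
fi
```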
Based on WebLogic Server, OVM Manager is a Java-based management server component with a Web-based UI. It utilizes Oracle Database as a management repository and comes prepackaged with the free XE edition of Oracle Database, which can be migrated to other editions of the Oracle Database server family.
With OVM 3.2.x, MySQL is also supported as a repository database option. OVM agent processes run on each OVM server for communication and management purposes. OVM uses server pools (or clusters) to group virtualization resources: each server pool encompasses one or more OVM servers.
OVM templates, or Golden Images, are factory-packaged, preinstalled, and preconfigured VM images containing software products, complete with built-in best practices and ready to go. They provide reusability and full-stack implementation. All major Oracle products—for example, Oracle Database server, Fusion Middleware, Enterprise Linux, and RAC—are available as OVM templates.
OVM templates are vehicles for significantly reducing installation and configuration costs across the IT landscape.
The following methods can be employed to create OVM templates:
• P2V conversion
• Create VM templates from existing VM images
• Create VM templates from just enough operating system (JeOS)
OVM Assembly Builder provides a structured process for consolidating appliances into cohesive, reusable assemblies: you rapidly create and configure full-stack topologies and provision them onto virtualized appliances.
OVM Builder is used for creating dedicated VMs called software appliances and facilitates deployment of the entire application as a single, automatically configured unit. This tool can facilitate the building of private clouds significantly by building VM assemblies and deploying OVM templates.
OVM 3.x, the latest release, takes scalability to a whole new level. Based on Xen 4.x and packed with new features, this version includes many enhancements:
• A feature-rich Web-based UI, improved backup and recovery capability
• Simplified VM deployment, administration, and management with 64-bit DOM0
• Application-driven virtualization, up to 128 virtual CPUs, and 1 TB memory per guest VM
• Jobs-based VM operations
• Dynamic resource management
• Dynamic power management
• Comprehensive network and storage management
• Multiple-template cloning in a single step
• Over 100 factory-packaged best-practices built into OVM templates
• A centralized configuration and management solution in the form of OVM Manager
In other words, OVM 3.x is truly an enterprise-grade, groundbreaking release, fully managed through a browser-based UI provided by OVM Manager.
If you haven’t already embarked on this journey, now is a great time to upgrade and migrate your OVM infrastructures from 2.x to 3.x.
OVM provides broad-based high availability across the virtualization ecosystem in the form of high availability–enabled server pools (or clusters) on shared storage.
Salient features include:
• Live migration of guest VMs
• Automatic failover/restart of guest VMs in case of server failure
• Oracle Cluster File System 2 (OCFS2)—high availability on a cluster file system
• Server pool load balancing—using a best-fit algorithm, places guest VMs on the most appropriately loaded VM server
• Clustered OVM Manager
This approach is the easiest and fastest way to set up your own virtualized RAC database clusters as part of virtualized Oracle RAC database clouds. Simply download the OVM for x86 templates for RAC, install them, and in less than an hour, you have your own virtualized RAC up and running. This methodology is truly revolutionary and illustrates the beauty and power of agile provisioning of complex infrastructures and applications in cloud environments using virtualized templates.
While this approach is not covered in complete detail, the main utility used to set up, configure, and deploy a virtualized RAC from OVM templates, DeployCluster, is presented in the following section.
This section walks you through using the DeployCluster tool to rapidly configure and deploy a virtualized RAC database cluster. Listing 9.1 shows the example run.
[root@bsfmgr01 deploycluster]# ./deploycluster.py -u admin -p password -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
Oracle RAC OneCommand (v1.1.2) for Oracle VM - deploy cluster -
(c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1200:v1.1.2) - v2.6.6 -
bsfmgr01.bsflocal.com (x86_64)
Invoked as root at Sat Sep 22 20:10:04 2012 (size: 37600, mtime: Sun Aug 5 12:37:58 2012)
Using: ./deploycluster.py -u admin -p **** -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.399) protocol (1.8) CONNECTED (tcp) to
Oracle VM Manager (3.1.1.305) protocol (1.8) IP (192.168.1.51) UUID
(0004fb0000010000da73c3bcce15ca2e)
INFO: Inspecting /home/oracle/ovm3/deploycluster/bsfrac64.ini for number of nodes defined....
INFO: Detected 2 nodes in: /home/oracle/ovm3/deploycluster/bsfrac64.ini
INFO: Located a total of (2) VMs;
2 VMs with a simple name of: ['bsfracovm1', 'bsfracovm2']
INFO: Verifying all (2) VMs are in Running state
INFO: VM with a simple name of "bsfracovm1" is in Running state...
INFO: VM with a simple name of "bsfracovm2" is in Running state...
INFO: Detected that all (2) VMs specified on command have (5) common shared disks
between them (ASM_MIN_DISKS=5)
INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details
as follows:
netconfig.ini (Network setup): /home/oracle/ovm3/deploycluster/bsfrac64.ini
buildcluster: yes
INFO: Starting to send cluster details to all (2) VM(s).....
INFO: Sending to VM with a simple name of "bsfracovm1"....
INFO: Sending to VM with a simple name of "bsfracovm2"......
INFO: Cluster details sent to (2) VMs...
Check log (default location /u01/racovm/buildcluster.log) on build VM (bsfracovm1)...
INFO: deploycluster.py completed successfully at 20:10:19 in 15.7 seconds (00m:15s)
Logfile at: /home/oracle/ovm3/deploycluster/deploycluster2.log
Figure 9.1 and Figure 9.2 each show parts of a sample run of the DeployCluster tool. On your monitor, the INFO: labels (Figure 9.1) and the [ OK ] markers (Figure 9.2) should be green: all green means all good to go!
At the time of writing, OVM for x86 templates for RAC were only available for up to version 11gR2 and not for 12c. This is the recommended approach for setting up Oracle RAC as database clouds; however, because of the absence of OVM templates for 12c, we have included the longer alternative approach outlined in the next section. The other rationale for including this approach is that it enables you to learn the specific actions required to set up and configure RAC 12c from scratch.
This section takes you through an alternative, step-by-step approach to setting up your own virtualized RAC 12c in OVM for x86.
This chapter assumes that you already have an OVM 3.x server pool in an up-and-running state. If this is not the case, please refer to the OVM documentation to set up OVM 3.x. The following sections assume that you are familiar with basic RAC concepts (presented in earlier chapters). Also, this chapter and the next chapter are structured in a way that enables you to set up RAC database clouds in the comfort of your own home for learning purposes. Please note that the steps are identical to corporate RAC setups; however, the infrastructure is pared down to enable you to make use of hardware available at home.
The following hardware and software were used for setting up the OVM server pool for this example:
• OVM Manager and EM12c:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM
• OVM servers for server pool:
• (Qty: 3) 64-bit Intel x86 machines with 16 GB RAM each
• Shared storage:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM:
• Openfiler with 1 TB disk space available on iSCSI
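On each OVM server, the iSCSI LUNs exported by Openfiler are discovered and logged in to with the standard open-iscsi tools. The sketch below is hedged: the portal IP 192.168.2.35 is a placeholder, not a value from this setup, so substitute your own Openfiler address.

```shell
# Discover and log in to the iSCSI LUNs exported by the Openfiler host
# (the portal IP 192.168.2.35 is a placeholder - substitute your own).
# Run on each OVM server in the pool.
if command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm -m discovery -t sendtargets -p 192.168.2.35:3260 \
    || echo "discovery failed - check the portal address and network"
  iscsiadm -m node --login 2>/dev/null
  iscsiadm -m session || echo "no active sessions yet"
else
  echo "open-iscsi not installed (yum install iscsi-initiator-utils)"
fi
```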
Following are the high-level steps to build your own virtualized RAC–based database cloud:
1. Set up and configure the required hardware and software for an OVM server pool on shared storage.
2. Prepare and plan—Do your homework.
3. Install and set up grid infrastructure.
4. Install and set up non-shared database home(s).
5. Create a RAC database.
6. Configure and set up the RAC database as a monitored target in EM12c.
All of the preceding steps are detailed, elaborated on, and/or executed in the following sections of this chapter and the next one, with several alternative options presented for some of the involved steps.
While all of the following steps apply equally to corporate environments, they are written in such a way that you can set up a virtualized database cloud environment in your home, thereby learning how to install, set up, configure, and monitor RAC with minimal hardware.
Ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup.
This chapter assumes that you already have a basic server pool in OVM for x86, complete with OVM Manager 3.x and EM12c release 2 set up, configured, and ready to go for deploying an Oracle RAC 12c cluster (the hardware/software configuration used in this chapter and the next is outlined in the preceding section). If you need help with this process, be assured that it is simple, with a low learning curve, if you follow the appropriate OVM for x86 installation and setup documentation.
The following sections detail the steps involved in configuring OVM for RAC 12c.
Press the Push to All Servers button for Network Time Protocol (NTP).
Continue with the following steps:
1. Choose OVM Manager → Networking → Networks → Create (+) Button.
2. Select the Create a Hybrid Network with Bonds/Ports and VLANS option.
3. Enter the name and description of the OVM network. Select the Virtual Machine option.
4. Select the relevant ports.
5. Select the relevant VLAN segments.
6. Select the appropriate IP addressing scheme. Enter the IP addresses, net masks, and bonding options if applicable.
As shown in Figure 9.3, the new OVM network has been successfully created and is ready to be deployed and used.
To create the disks, follow these steps:
1. Choose OVM Manager → Repositories → Select OVS Repository → Virtual Disks → Create (+) Button:
2. Create a virtual disk with the following options:
• Size: 15 GB
• Allocation type: Sparse allocation
• Shareable: Yes
3. Repeat the preceding process for all five GRID1 Automatic Storage Management (ASM) disks.
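Repetitive disk creation can also be scripted. OVM 3.2 and later expose a Manager CLI (reached via ssh admin@<manager-host> -p 10000) that accepts create VirtualDisk commands; the exact syntax, the disk names, and the repository name OVS_Repo below are assumptions to verify against your OVM CLI documentation. This sketch just generates the five commands for the GRID1 disks:

```shell
# Generate OVM Manager CLI commands for the five shareable GRID1 ASM
# disks (15 GB, sparse). The command syntax, disk names, and repository
# name are assumptions - confirm them for your OVM release before
# piping this output to the CLI over ssh.
for i in 1 2 3 4 5; do
  echo "create VirtualDisk name=GRID1_DISK${i} size=15 sparse=Yes shareable=Yes on Repository name=OVS_Repo"
done
```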
For production environments, it is highly recommended to have physical block devices presented as virtual disks for the various ASM disk groups.
As shown in Figure 9.4, all of the virtualized shareable ASM disks for the GRID1 disk groups have been created and are now ready for use.
Step 9.5 has two alternative approaches, both explained in detail next; each has further substeps, which are illustrated as well.
Step 9.5, Approach 1, is illustrated in the following substeps.
To create the Oracle Enterprise Linux (OEL) 6.x VM for RAC using an ISO boot image, follow these steps (as shown in Figure 9.5):
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine Button.
2. Select the Create a New VM option.
3. As shown in Figure 9.5, select and enter the following options for the RAC-Node-01 VM:
• Server pool
• Server pool repository
• VM description
• High Availability: Unchecked
• Operating system: Oracle Linux 6
• Domain type: Xen PVM
• Start policy: Start on best server
• Memory (MB): 2,048 (the minimum required is 4,096; however, if you are short on physical memory and are building the RAC for learning purposes, 2,048 can suffice)
• Max memory (MB): 8,192
• Processors: 2
• Max processors: 4
• Priority: 50
• Processor cap %: 100
4. Select Network and then press the Add VNIC button twice to create two virtual network interface cards (VNICs) for the RAC-Node-01 VM (see Figure 9.6).
5. Choose Next to move on to Setup Networks and Arrange Disks.
6. Select and enter the following options for the VM disks (see Figure 9.7):
• CD/DVD
• Virtual disk—Press the Create (+) button.
7. Select the imported ISO for Linux 6.x.
8. Select and enter the following options:
• Repository
• Virtual disk name
• Description
• Shareable: Unchecked
• Size: 25GB
• Allocation type: sparse allocation
9. Select the ISO for OEL 6.x (see Figure 9.8).
10. As shown in Figure 9.9, repeat the preceding process to add/select the following disks:
• DATA1 ASM disk group:
• Qty: 6
• Individual disk size: 50GB
• RECO1 ASM disk group:
• Qty: 1
• Individual disk size: 50GB
Select the Disk boot option. Press the Finish button to create the RAC-Node-01 VM.
To import the OEL 6.x x86-64 ISO image into the OVM repository, follow these steps:
1. Go to OVM Manager → Repositories → Select OVM Repository → ISOs → Import ISO Button.
2. Select and enter the following (see Figure 9.10):
• Server
• ISO download location: ftp://oracle:[email protected]/software/OEL63_x86_64/V33411-01.iso (Replace the IP address, username, and password with your own.)
Ensure that the Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP service) is set up correctly and that the ISO is available at the desired location and has the correct permissions.
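A quick way to confirm these prerequisites is to attempt an FTP directory listing from the OVM server itself. In the sketch below, <ftp-host> and the credentials are placeholders for your own values; the path matches the example download location.

```shell
# Verify the ISO is reachable over FTP before starting the import.
# <ftp-host>, user, and password are placeholders - substitute your own.
curl -sS --connect-timeout 5 --list-only \
  -u oracle:password "ftp://<ftp-host>/software/OEL63_x86_64/" \
  || echo "FTP listing failed - check vsftpd, the path, and permissions"
```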
Note that the status of the import process shows as In Progress, with a message showing Download Virtual CDROM....
Monitor the progress of the ISO import process in an SSH session to one of the Oracle VM servers (OVS) to which the OVS repository is connected.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 276480
-rw-r--r-- 1 root root 282066944 Feb 24 12:57 0004fb0000150000ba1fd09b4e2bd98c.iso
Keep checking periodically after brief intervals to monitor the progress of the ISO import process.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 2890752
-rw-r--r-- 1 root root 2959081472 Feb 24 13:04 0004fb0000150000ba1fd09b4e2bd98c.iso
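Instead of re-running ls by hand, a small polling loop can wait until the file stops growing. The sketch below uses the repository path from the listing above; the polling interval and the size-stable completion heuristic are arbitrary choices.

```shell
# Poll the imported ISO's total size until it stops changing between
# checks, then report completion. Path is the repository ISOs directory
# shown in the listing above.
dir=/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
prev=-1
while :; do
  size=$(du -b "$dir"/*.iso 2>/dev/null | awk '{s+=$1} END {print s+0}')
  [ "$size" = "$prev" ] && break
  prev=$size
  sleep 15
done
echo "ISO import appears complete at $size bytes"
```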
At this point, the OEL 6.x ISO has been successfully imported (see Figure 9.11). Start up the VM, boot it from the OEL 6.x ISO, and go through the steps of setting up and installing Oracle Enterprise Linux 6.x.
Step 9.5, Approach 2, is illustrated in the following substeps.
To create the VM for RAC Server Node using an OVM OEL 6.x template, follow these steps:
1. Download the OEL 6.x OVM 3.x template from https://edelivery.oracle.com/linux.
2. Unzip the ISO and make it available via FTP.
3. Go to OVM Manager → Repositories → Select OVM Repository → Assemblies → Import VM Assembly Button (see Figure 9.12). Enter the following:
• Server
• VM assembly download location
Note that the status of the VM assembly import process shows as In Progress with a message showing “Downloading Assembly...” and then another one showing “Unpacking Template....”
Monitor the progress of the VM assembly import process within an SSH session to one of the OVS servers to which the OVS repository is connected.
[root@bsfovs03 11941bfbbc]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/Assemblies/11941bfbbc
[root@bsfovs03 11941bfbbc]# ls -l
total 617472
drwxr-xr-x 2 root root 3896 Feb 24 15:20 imports
-rw-r--r-- 1 root root 631541760 Oct 10 16:17 package.ova
drwxr-xr-x 2 root root 3896 Feb 24 15:21 unpacked
Keep checking periodically after brief intervals.
As shown in Figure 9.13, the OVM assembly for OEL 6.3 x86-64 has been imported successfully and is now ready for use.
To create the OEL 6.3 x86-64-PVM OVM template from the newly created assembly in the OVM repository, follow these steps:
1. Choose OVM Manager → Repositories → Select OVM Repository → VM Assemblies → Select VM Assembly → Create VM template.
2. Enter and select the following (see Figure 9.14):
• Assembly VMs
• VM template name
• Description
3. As shown in Figure 9.15, the OEL 6.3 x86_64 OVM template has been successfully created and is now ready for deployment.
To edit the newly created OEL 6.3 x86-64-PVM OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template → Edit.
2. Modify the following options as shown in the following screens:
• Max memory (MB): 8,192
• Memory (MB): 4,096
• Max processors: 8
• Processors: 4
• Enable High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Networks: Add/specify the appropriate network(s)
• Boot order: Disk
• Virtual disks:
• Add virtual disk for Oracle software binaries: 25GB
To create a clone customizer for the RAC node OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template.
2. Press the Create Clone Customizer button.
3. Specify the name and description of the new clone customizer for the RAC 12c cluster node VMs.
4. Modify the Clone Type to Thin Clone (see Figure 9.16). This is a fast and efficient way to create new VM clone machines.
5. Specify the network settings for the clone customizer (if any custom modifications are required).
To create the RAC-Node-01 VM from the VM template using the Advanced Clone Customizer method, do the following:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine.
2. Select the Clone from an existing VM Template option (see Figure 9.17). Enter and select the following options:
• Clone count: 1
• Repository: Select OVS repository
• VM template: Select the OEL 6.3 x86_64 template
• Server pool: Select the appropriate server pool
• Description
3. Press the Finish button to create the RAC-Node-01 VM. The finished product is shown in Figure 9.18.
To edit the VM for RAC-Node-01, follow these steps:
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Edit Virtual Machine.
2. Modify the following options as shown in the configuration tab (Figure 9.19):
• Operating system: Oracle Linux 6
• Max memory (MB): 8,192
• Max processors: 8
• Networks: Specify the appropriate network(s)
• High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Boot order: Disk
• Start policy: Start on best server
• Virtual disks:
• System (virtual disk): Add another disk for Oracle binaries: 25GB
• GRID1 ASM disk group:
Qty: 6 Disks
Individual disk size: 15G
• DATA1 ASM disk group:
Qty: 6 Disks
Individual disk size: 50G
• RECO1 ASM disk group:
Qty: 1 Disk
Individual disk size: 50G
3. On the Network tab, add two VNICs (see Figure 9.20), one each for the public and private cluster interconnects.
4. Finally, on the Disks tab, attach the shared virtualized disks for the ASM disk groups (see Figure 9.21).
To start up the RAC-Node-01 VM, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Start Virtual Machine.
2. Press the Launch Console button to start the VNC console window (see Figure 9.22).
3. Configure the various options in the new VM in the first-boot interview process:
• System hostname: For example, bsfrac01.bsflocal.com (12c RAC-Node-01)
• Specify VNIC device: For example, ETH0
• Boot protocol (static/DHCP): Static
• Activate VNIC on boot: YES
• IP address of the interface: For example, 192.168.2.40 (for the public network; ensure that this is an unused IP address. If you have DNS server(s), register the IP address/hostname with them)
• Netmask: For example, 255.255.255.0
• IP address of gateway: For example, 192.168.2.1
• IP address(es) of DNS server(s): For example, 192.168.2.1
• OS root password: *******
The following sections explain how to set up and configure Node 01 for RAC 12c.
To set the network configuration of the private cluster interconnect VNIC, issue the following commands:
[root@bsfrac01 network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@bsfrac01 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@bsfrac01 network-scripts]# vi ifcfg-eth1
You have new mail in /var/spool/mail/root
[root@bsfrac01 network-scripts]# cat ifcfg-eth1
DNS1=192.168.2.1
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
IPADDR=192.168.3.40
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DEVICE=eth1
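Before restarting networking, a quick sanity check on the new ifcfg file can catch typos. This sketch validates the key settings against an inline copy of the ifcfg-eth1 contents shown above; on the node itself, you would point the checks at the real file instead.

```shell
# Validate that the private-interconnect config carries the expected
# device, protocol, and address. The heredoc is an inline copy of the
# ifcfg-eth1 file shown above; on the node, use:
#   cfg=$(cat /etc/sysconfig/network-scripts/ifcfg-eth1)
cfg=$(cat <<'EOF'
DNS1=192.168.2.1
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
IPADDR=192.168.3.40
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DEVICE=eth1
EOF
)
for kv in DEVICE=eth1 BOOTPROTO=static IPADDR=192.168.3.40 ONBOOT=yes; do
  echo "$cfg" | grep -qx "$kv" && echo "ok: $kv" || echo "MISSING: $kv"
done
```

Once all four checks print ok, apply the configuration with service network restart and confirm the address with ip addr show eth1.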
To modify the /etc/hosts file to include the relevant entries for RAC 12c, open the file and edit it, as in the following:
[root@bsfrac01 network-scripts]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4
::1 localhost6.localdomain6 localhost6
# PUBLIC IP Addresses of 12c RAC Cluster
192.168.2.40 bsfrac01 bsfrac01.bsflocal.com
192.168.2.41 bsfrac02 bsfrac02.bsflocal.com
192.168.2.42 bsfrac03 bsfrac03.bsflocal.com
192.168.2.43 bsfrac04 bsfrac04.bsflocal.com
192.168.2.44 bsfrac05 bsfrac05.bsflocal.com
# SCAN IP Addresses of 12c RAC Cluster
192.168.2.70 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.71 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.72 bsfrac-scan bsfrac-scan.bsflocal.com
# Virtual IP Addresses of 12c RAC Cluster
192.168.2.60 bsfrac-vip01 bsfrac-vip01.bsflocal.com
192.168.2.61 bsfrac-vip02 bsfrac-vip02.bsflocal.com
192.168.2.62 bsfrac-vip03 bsfrac-vip03.bsflocal.com
192.168.2.63 bsfrac-vip04 bsfrac-vip04.bsflocal.com
192.168.2.64 bsfrac-vip05 bsfrac-vip05.bsflocal.com
# Private Cluster Interconnect IP Addresses of 12c RAC Cluster
192.168.3.40 bsfrac-priv01 bsfrac-priv01.bsflocal.com
192.168.3.41 bsfrac-priv02 bsfrac-priv02.bsflocal.com
192.168.3.42 bsfrac-priv03 bsfrac-priv03.bsflocal.com
192.168.3.43 bsfrac-priv04 bsfrac-priv04.bsflocal.com
192.168.3.44 bsfrac-priv05 bsfrac-priv05.bsflocal.com
Single Client Access Name (SCAN) listener IP information is included in the /etc/hosts file. The SCAN IPs should be registered with the appropriate DNS server(s).
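Because the public, VIP, and private entries follow a regular pattern, they can also be generated with a small loop rather than typed by hand. A sketch (the three SCAN entries are added separately; output goes to a local file here rather than directly to /etc/hosts):

```shell
# Generate the 15 regular RAC host entries (public, VIP, and private
# interconnect for five nodes). The three SCAN lines are added by hand.
HOSTS=./hosts
: > "$HOSTS"
for i in 1 2 3 4 5; do
  echo "192.168.2.$((39 + i)) bsfrac0$i bsfrac0$i.bsflocal.com"           >> "$HOSTS"
  echo "192.168.2.$((59 + i)) bsfrac-vip0$i bsfrac-vip0$i.bsflocal.com"   >> "$HOSTS"
  echo "192.168.3.$((39 + i)) bsfrac-priv0$i bsfrac-priv0$i.bsflocal.com" >> "$HOSTS"
done
wc -l < "$HOSTS"   # should report 15 entries
```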
Ensure that enough temporary space is available to support RAC 12c:
[root@bsfracvx1 ~]# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_bsfracvx1-lv_root
26G 5.3G 20G 22% /
Next, disable the Linux software firewall. This step is optional and should be exercised with caution; perform it only if ancillary hardware/software firewalls are already in place in the corporate landscape.
[root@bsfrac01 ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig iptables off
[root@bsfrac01 ~]# service ip6tables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
2 ACCEPT icmpv6 ::/0 ::/0
3 ACCEPT all ::/0 ::/0
4 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:22
5 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig ip6tables off
Edit/configure the /etc/ntp.conf file and restart the Network Time Protocol Daemon (NTPD) server on the RAC node VM.
$ vi /etc/ntp.conf
# Modify the following line to reflect the NTP servers with which the time
# will be synchronized
server 192.168.2.20
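In addition, Oracle's prerequisite checks expect ntpd to run with the -x (slewing) option so that the clock is never stepped backward. A sketch of the corresponding edit, shown here on a local copy of /etc/sysconfig/ntpd rather than the real file:

```shell
# Add -x to ntpd's startup options (shown on a local sample copy of
# /etc/sysconfig/ntpd; edit the real file on the node itself).
conf=./ntpd.sysconfig
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"' > "$conf"
sed -i 's/^OPTIONS="/OPTIONS="-x /' "$conf"
grep '^OPTIONS' "$conf"
# Then restart the daemon:  service ntpd restart
```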
To partition, format, and mount /u01 on the 25-GB local virtual hard disk (here, /dev/xvdb has already been partitioned and formatted as /dev/xvdb1), start by doing the following:
[root@bsfrac01 /]# mkdir /u01
You have new mail in /var/spool/mail/root
[root@bsfrac01 /]# mount /dev/xvdb1 /u01
[root@bsfrac01 /]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9985 8087 1392 86% /
tmpfs 1940 1 1940 1% /dev/shm
/dev/xvda1 99 50 45 53% /boot
/dev/xvdb1 25195 172 23743 1% /u01
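The mount shown above assumes that /dev/xvdb has already been carved into a partition and formatted. For a raw disk, the full sequence looks like the following dry-run sketch, which prints the commands instead of running them (run each printed command as root on the node itself):

```shell
# Dry run: print the partition/format/mount sequence for a raw disk.
DEV=/dev/xvdb          # assumed device name; adjust for your VM
u01_cmds=$(cat <<EOF
printf 'n\np\n1\n\n\nw\n' | fdisk $DEV    # one primary partition, whole disk
mkfs -t ext4 ${DEV}1                      # format the new partition
mkdir -p /u01
mount ${DEV}1 /u01
EOF
)
echo "$u01_cmds"
```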
Make the mount point persistent by modifying the /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Fri Sep 7 08:14:40 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
LABEL=/ / ext4 defaults 1 1
LABEL=/boot /boot ext4 defaults 1 2
/dev/xvda3 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb1 /u01 ext4 defaults 0 0
Disable SELinux by modifying the /etc/selinux/config file:
[root@bsfrac01 /]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
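The same edit can be made non-interactively with sed, which is convenient when preparing several nodes. A sketch, shown on a local sample copy of the file rather than /etc/selinux/config itself (setenforce 0 additionally relaxes SELinux to Permissive for the current session, without a reboot):

```shell
# Disable SELinux non-interactively (demonstrated on a local sample copy
# of /etc/selinux/config; point conf at the real file on the node).
conf=./selinux.config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
grep '^SELINUX=' "$conf"
```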
Optionally, install VSFTPD (FTP server) by performing the following:
[root@bsfrac01 ~]# yum install vsftpd
Install the X Window System desktop by performing the following steps:
[root@bsfrac01 /]# yum groupinstall "X Window System" desktop
Loaded plugins: security
Setting up Group Process
Package 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 already installed and latest version
Package hal-0.5.14-11.el6.x86_64 already installed and latest version
Package 1:dbus-1.2.24-7.0.1.el6_3.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:0.8.1-34.el6_3 will be installed
.
.
.
openssh-server x86_64 5.3p1-84.1.el6
ol6_latest 298 k
rhn-check noarch 1.0.0-87.0.6.el6
ol6_latest 60 k
rhn-client-tools noarch 1.0.0-87.0.6.el6
ol6_latest 492 k
rhn-setup noarch 1.0.0-87.0.6.el6
ol6_latest 96 k
Transaction Summary
=====================================================================================================================================================================================================
Install 265 Package(s)
Upgrade 19 Package(s)
Total download size: 123 M
Is this ok [y/N]: y
Downloading Packages:
(1/284): ConsoleKit-x11-0.4.1-3.el6.x86_64.rpm | 20 kB 00:00
(2/284): DeviceKit-power-014-3.el6.x86_64.rpm | 90 kB 00:00
.
.
.
Dependency Updated:
libreport.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-cli.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-kerneloops.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-logger.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-
mailx.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-reportuploader.x86_64 0:2.0.9-
5.0.1.el6_3.2
libreport-python.x86_64 0:2.0.9-5.0.1.el6_3.2 nspr.x86_64 0:4.9.2-
0.el6_3.1 nss.x86_64 0:3.13.6-2.0.1.el6_3
nss-sysinit.x86_64 0:3.13.6-2.0.1.el6_3 nss-tools.x86_64 0:3.13.6-2.0.1.el6_3 nss-util.x86_64 0:3.13.6-1.el6_3
openssh.x86_64 0:5.3p1-84.1.el6 openssh-clients.x86_64
0:5.3p1-84.1.el6 openssh-server.x86_64 0:5.3p1-84.1.el6
rhn-check.noarch 0:1.0.0-87.0.6.el6 rhn-client-tools.noarch
0:1.0.0-87.0.6.el6 rhn-setup.noarch 0:1.0.0-87.0.6.el6
Complete!
The output of the X Window System desktop installation is very long and has been abbreviated.
Modify the /etc/inittab file to start with a GUI login and reboot the system:
#id:3:initdefault:   # Change option 3 to 5, as shown in the following line
id:5:initdefault:
To reboot, issue the following command:
[root@bsfrac01 network-scripts]# shutdown -r now
After a successful reboot, you will arrive at the login screen (see Figure 9.23).
To verify the network settings after the reboot, do the following:
[root@bsfrac01 /]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:21:F6:00:00:01
inet addr:192.168.2.40 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49103 errors:0 dropped:117 overruns:0 frame:0
TX packets:12982 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33876842 (32.3 MiB) TX bytes:939705 (917.6 KiB)
Interrupt:57
eth1 Link encap:Ethernet HWaddr 00:21:F6:00:00:00
inet addr:192.168.3.40 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25180 errors:0 dropped:117 overruns:0 frame:0
TX packets:198 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1316336 (1.2 MiB) TX bytes:12163 (11.8 KiB)
Interrupt:58
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:930 (930.0 b) TX bytes:930 (930.0 b)
Perform the following checks (steps 9.19–9.28) to satisfy the prerequisites for RAC 12c on Node 01.
To check that the space requirement has been met, do the following (10 GB is recommended):
[oracle@bsfrac01 Database]$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 9.8G 7.9G 1.4G 86% /
Issue these commands to create the OS groups:
[root@bsfrac01 /]# groupadd -g 54321 oinstall
[root@bsfrac01 /]# groupadd -g 54322 dba
[root@bsfrac01 /]# groupadd -g 54323 oper
[root@bsfrac01 /]# groupadd -g 54324 backupdba
[root@bsfrac01 /]# groupadd -g 54325 dgdba
[root@bsfrac01 /]# groupadd -g 54326 kmdba
[root@bsfrac01 /]# groupadd -g 54327 asmdba
[root@bsfrac01 /]# groupadd -g 54328 asmoper
[root@bsfrac01 /]# groupadd -g 54329 asmadmin
Some of the preceding groups are optional; whether to create them depends on the role separation in use (for example, DBA, DMA, storage/system administrator, or another role).
To create the oracle and grid OS users as the owners of the Oracle Database home and grid infrastructure home software, respectively, and set their initial passwords, issue these commands:
[root@bsfrac01 /]# useradd -u 54321 -g oinstall -G dba,asmdba oracle
[root@bsfrac01 /]# useradd -u 54322 -g oinstall -G asmadmin,asmdba grid
[root@bsfrac01 /]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@bsfrac01 /]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
As the root OS user, run the following commands:
[root@bsfrac01 /]# mkdir -p /u01/app/12.1.0/grid
[root@bsfrac01 /]# mkdir -p /u01/app/grid
[root@bsfrac01 /]# mkdir -p /u01/app/oracle
[root@bsfrac01 /]# chown -R grid:oinstall /u01
[root@bsfrac01 /]# chown oracle:oinstall /u01/app/oracle
[root@bsfrac01 /]# chmod -R 775 /u01/
Check the required and relevant permissions set for the OFA directory structure:
[root@bsfrac01 oracle]# ls -l /u01
total 4
drwxrwxr-x 5 grid oinstall 4096 Feb 25 23:29 app
[root@bsfrac01 oracle]# ls -l /u01/app/
total 12
drwxrwxr-x 3 grid oinstall 4096 Feb 25 23:29 12.1.0
drwxrwxr-x 2 grid oinstall 4096 Feb 25 23:29 grid
drwxrwxr-x 2 oracle oinstall 4096 Feb 25 23:29 oracle
Configure the NTPD service to start now and at boot:
[root@bsfrac01 ~]# service ntpd start
Starting ntpd: [ OK ]
[root@bsfrac01 ~]# chkconfig ntpd on
Do the following to turn off and unconfigure the Avahi daemon:
[root@bsfrac01 ~]# service avahi-daemon stop
Shutting down Avahi daemon: [ OK ]
[root@bsfrac01 ~]# chkconfig avahi-daemon off
Within OEL, using the GUI software installer (see Figure 9.24) or the rpm command-line utility, ensure that the following packages for OEL 6.x x86_64 are installed at the listed versions or later. Additionally, download and install any ancillary packages that aid the performance of the RAC.
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
ksh
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
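Rather than querying each package individually, as shown next, the whole list can be checked with a small loop. A sketch, guarded so that it degrades gracefully on systems without rpm:

```shell
# Check the whole required-package list in one pass.
# Guarded: silently skipped on systems without rpm.
required='binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc
glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel
make sysstat'
missing=''
if command -v rpm >/dev/null 2>&1; then
  for p in $required; do
    rpm -q "$p" >/dev/null 2>&1 || missing="$missing $p"
  done
fi
echo "missing:${missing:- none}"
```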
[root@bsfrac01 ~]# rpm -qa glibc*
glibc-common-2.12-1.80.el6_3.5.x86_64
glibc-devel-2.12-1.80.el6_3.5.x86_64
glibc-2.12-1.80.el6_3.5.x86_64
glibc-headers-2.12-1.80.el6_3.5.x86_64
[root@bsfrac01 ~]# rpm -qa libstdc++*
libstdc++-4.4.6-4.el6.x86_64
libstdc++-devel-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa gcc*
gcc-c++-4.4.6-4.el6.x86_64
gcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa ksh*
ksh-20100621-16.el6.x86_64
[root@bsfrac01 ~]# rpm -qa make*
make-3.81-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa sysstat*
sysstat-9.0.4-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libgcc*
libgcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libaio*
libaio-devel-0.3.107-10.el6.x86_64
libaio-0.3.107-10.el6.x86_64
[root@bsfrac01 ~]# rpm -qa binutils*
binutils-2.20.51.0.2-5.34.el6.x86_64
[root@bsfrac01 ~]# rpm -qa compat-lib*
compat-libcap1-1.10-1.x86_64
compat-libstdc++-33-3.2.3-69.el6.x86_64
Next, create primary partitions on all the virtual disks that will back the GRID1, DATA1, and RECO1 ASM disk groups:
[root@bsfrac01 ~]# fdisk /dev/xvdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel.
Building a new DOS disklabel with disk identifier 0x6a917f21.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Command (m for help): w
The partition table has been altered!
Repeat the preceding steps for all the ASM disks, including grid infrastructure disks.
Verify the partition structures for the underlying disks:
[root@bsfrac01 /]# fdisk -l
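Because the fdisk dialog is identical for every ASM disk, its answers can be piped in and the whole set scripted. A dry-run sketch that builds one command per disk (the xvdc through xvdn device letters follow this chapter's disk layout; on the node itself, review the output and run each line as root):

```shell
# Build one non-interactive fdisk command per ASM disk (dry run).
# Device letters c..n follow this chapter's layout; adjust to yours.
part_cmds=''
for d in c d e f g h i j k l m n; do
  part_cmds="$part_cmds
printf 'n\\np\\n1\\n\\n\\nw\\n' | fdisk /dev/xvd$d"
done
echo "$part_cmds"      # review, then run each printed line as root
```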
Configure the ASM library by performing the following steps:
[root@bsfrac01 dev]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl+C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK01 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK02 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK03 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK04 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK05 /dev/xvdg1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK01 /dev/xvdh1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK02 /dev/xvdi1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK03 /dev/xvdj1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK04 /dev/xvdk1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK05 /dev/xvdl1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK06 /dev/xvdm1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk RECO1DISK06 /dev/xvdn1
Writing disk header: done
Instantiating disk: done
Alternatively, you can set up and configure UDEV rules for the ASM disks instead of ASMLib. Verify that all the ASM disks were created:
[root@bsfrac01 dev]# /etc/init.d/oracleasm listdisks
DATA1DISK01
DATA1DISK02
DATA1DISK03
DATA1DISK04
DATA1DISK05
DATA1DISK06
GRID1DISK01
GRID1DISK02
GRID1DISK03
GRID1DISK04
GRID1DISK05
RECO1DISK06
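Note that createdisk is run once, on Node 01 only. On the remaining nodes, once they have been cloned later in this chapter, the same shared disks are simply discovered with scandisks. A sketch, guarded so that it is a no-op on hosts where ASMLib is not installed:

```shell
# On a cloned node, discover the ASM disks created on Node 01.
# Guarded: prints a notice instead of failing where ASMLib is absent.
if command -v oracleasm >/dev/null 2>&1; then
  oracleasm scandisks     # probe the shared disks for ASMLib headers
  oracleasm listdisks     # should list the same 12 disks as on Node 01
else
  echo "oracleasm not installed on this host"
fi
```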
Download, unzip, and stage the Oracle grid infrastructure and database software binaries:
[oracle@bsfrac01 Database]$ unzip -q linuxx64_database_12.1BETA_130131_1of2.zip
Repeat the unzip process for all the software binary zip files, and verify the unzipped and staged directory structure.
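The repeated unzip step can be scripted; a sketch, with the glob guarded so the loop is harmless if no matching archives are present in the staging directory:

```shell
# Unzip every staged 12c software archive in one pass.
# The -e guard makes the loop a no-op when the glob matches nothing.
for z in linuxx64_*12.1*.zip; do
  [ -e "$z" ] || continue
  unzip -q "$z"
done
```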
Before cloning, temporarily detach the shared ASM virtual disks from the Node 01 VM: go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button. As shown in Figure 9.25, this action temporarily removes all the shared ASM virtual disks. This step is necessary; otherwise, unneeded clones of these disks will be created during the ensuing cloning process for the RAC nodes.
To clone the other nodes of the RAC, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM and right-click. Choose Clone or Move (see Figure 9.26).
2. Select the Create a clone of this VM option.
3. Select and enter the following options:
• Target server pool
• Description
• Clone to a: VM
• Clone count: Specify the clone count for the RAC nodes
As shown in Figure 9.27, all the nodes for a five-node RAC have been successfully created and are ready for further configuration.
Next, reattach the shared disks to the RAC node VMs:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button (see Figure 9.28).
2. Select the shared disks.
3. Repeat the preceding steps for all the RAC node VMs.
Next, update the network settings on each cloned RAC node VM:
1. Go to System → Preferences → Network Settings → IPv4 Settings.
2. Modify the IP addresses of the eth0 (public) and eth1 (private cluster interconnect) NICs, as shown in Figure 9.29.
3. Repeat for all of the RAC nodes.
As you can see from the preceding sections, setting up and installing Oracle RAC is all about doing an extensive amount of homework in the right way. To summarize the activity covered in the last few sections, ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup:
• Dedicated virtual network for RAC has been created and configured.
• Virtualized shared ASM disks for the GRID1 ASM disk group are created and ready for use.
• VMs that will constitute the nodes of the RAC 12c have been created, set up, and configured.
• OEL 6.x is set up and configured on RAC-Node-01 using one of two alternative approaches: installing it from scratch or using the downloadable templates for OVM for x86.
• The VMs for the other nodes of the RAC 12c have been cloned from the RAC-Node-01 VM.
It’s now time to set up Oracle grid infrastructure and get the RAC 12c bird off the ground and in the air.
As shown in Figure 9.30, at this point you want to start the VMs for the RAC 12c cluster.
Enter the information and make the selections in the wizard screens of the Oracle Universal Installer (OUI), as shown in Figures 9.31 through 9.45, adjusting the entries to the specific needs of your organization where necessary.
1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches (see Figure 9.31), or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option (see Figure 9.32).
3. Select the Configure a Flex Cluster option (see Figure 9.33).
4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS) (see Figure 9.34).
6. Enter the relevant information for the RAC 12c nodes, including for HUB and LEAF nodes (see Figure 9.35).
7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes, as in Figure 9.36.
8. Once step 7 is done, the system tests the SSH connectivity between the nodes (see Figure 9.37).
9. Specify the network interfaces for public, private cluster interconnect, and ASM. The system will validate those as well (see Figure 9.38).
10. Select the Configure Grid Infrastructure Management Repository option (see Figure 9.39).
11. If you choose No, then the message in Figure 9.40 is displayed.
12. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level (see Figure 9.41).
13. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
14. Select the Do Not Use Intelligent Platform Management Interface (IPMI) option.
15. Specify the OS groups for ASM.
16. Enter the Oracle BASE and HOME locations.
17. Enter the Oracle inventory location (see Figure 9.42).
18. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts (see Figure 9.43).
19. Generate and run any runfixup.sh scripts to remediate any prerequisite issues (see Figure 9.44).
20. Click Install to initiate the installation process for grid infrastructure (see Figure 9.45).
The following output will appear in the SSH window:
[oracle@bsfrac01 grid]$ pwd
/u01/app/oracle/software/Ora12c/Grid/grid
[oracle@bsfrac01 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 5620 MB Passed
Checking swap space: must be greater than 150 MB. Actual 2047 MB Passed
Checking monitor: must be configured to display at least 256 colors.
Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from
/tmp/OraInstall2013-02-26_01-00-01AM. Please wait ...
The following grid infrastructure processes or daemons should appear on the RAC nodes after setup.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7619 oracle 20 0 98432 1252 612 S 6.9 0.0 30:33.70 sshd
24066 oracle 20 0 612m 48m 15m S 4.0 1.7 11:10.74 gipcd.bin
24700 oracle -2 0 1368m 16m 14m S 4.0 0.6 10:20.84 asm_vktm_+asm1
28408 oracle -2 0 1368m 15m 13m S 4.0 0.5 10:06.05 apx_vktm_+apx1
24336 oracle RT 0 1157m 182m 80m S 3.6 6.4 9:49.82 ocssd.bin
24805 root RT 0 788m 101m 71m S 3.6 3.6 8:03.76 osysmond.bin
7670 oracle 20 0 1580m 227m 6076 S 2.3 8.0 71:35.10 java
23757 root 20 0 1194m 59m 19m S 2.3 2.1 5:36.65 ohasd.bin
24963 root 20 0 1146m 43m 16m S 2.3 1.5 6:24.32 orarootagent.bi
24812 root 20 0 1247m 79m 26m S 1.6 2.8 4:42.22 crsd.bin
24011 oracle 20 0 1150m 34m 17m S 1.3 1.2 1:46.04 oraagent.bin
24454 root 20 0 791m 36m 14m S 1.3 1.3 3:02.20 octssd.bin
25086 oracle 20 0 1754m 151m 19m S 1.3 5.3 2:53.32 java
3728 oracle 20 0 15180 1256 896 R 1.0 0.0 0:00.08 top
24024 oracle 20 0 667m 38m 15m S 1.0 1.4 3:04.61 evmd.bin
24311 root RT 0 918m 106m 71m S 0.7 3.7 0:16.43 cssdmonitor
24720 oracle -2 0 1382m 33m 20m S 0.7 1.2 1:41.75 asm_lms0_+asm1
24864 root RT 0 849m 160m 71m S 0.7 5.6 1:48.41 ologgerd
57 root 20 0 0 0 0 S 0.3 0.0 0:12.15 kworker/1:1
24043 oracle 20 0 659m 36m 14m S 0.3 1.3 0:10.50 gpnpd.bin
24655 oracle 20 0 1368m 19m 17m S 0.3 0.7 0:07.28 asm_pmon_+asm1
24710 oracle 20 0 1374m 25m 17m S 0.3 0.9 0:26.34 asm_diag_+asm1
24716 oracle 20 0 1385m 40m 25m S 0.3 1.4 1:11.91 asm_lmon_+asm1
24718 oracle 20 0 1383m 30m 17m S 0.3 1.1 0:42.79 asm_lmd0_+asm1
24951 oracle 20 0 1180m 73m 20m S 0.3 2.6 2:35.66 oraagent.bin
25065 oracle 20 0 1050m 9072 1268 S 0.3 0.3 0:02.38 ons
30490 oracle 20 0 1373m 28m 23m S 0.3 1.0 0:02.43 oracle_30490_+a
Hub Node(s):
2318 gdm 20 0 332m 9924 8924 S 13.9 0.3 0:53.31 gnome-settings-
10097 oracle -2 0 1368m 15m 13m S 5.3 0.6 7:47.66 apx_vktm_+apx2
8958 oracle 20 0 602m 47m 15m S 4.6 1.7 9:34.31 gipcd.bin
9173 root RT 0 789m 101m 71m S 4.0 3.6 6:32.76 osysmond.bin
9002 oracle RT 0 1159m 175m 80m S 3.6 6.1 10:26.57 ocssd.bin
9506 oracle -2 0 1368m 16m 14m S 3.6 0.6 8:41.92 asm_vktm_+asm2
8809 root 20 0 1190m 68m 30m S 2.0 2.4 4:42.47 ohasd.bin
8909 oracle 20 0 1150m 39m 20m S 1.3 1.4 2:53.17 oraagent.bin
8922 oracle 20 0 666m 39m 16m S 1.3 1.4 2:41.23 evmd.bin
9151 root 20 0 725m 36m 14m S 1.3 1.3 2:32.39 octssd.bin
9281 root 20 0 748m 29m 16m S 1.3 1.0 3:35.72 orarootagent.bi
9180 root 20 0 1240m 68m 30m S 1.0 2.4 3:15.15 crsd.bin
9521 oracle 20 0 1379m 33m 23m S 0.7 1.2 0:56.36 asm_dia0_+asm2
9528 oracle -2 0 1382m 33m 20m S 0.7 1.2 1:29.06 asm_lms0_+asm2
14933 oracle 20 0 15080 1144 812 R 0.7 0.0 0:03.13 top
2207 root 20 0 121m 10m 6116 S 0.3 0.4 0:04.20 Xorg
9347 oracle 20 0 1164m 61m 22m S 0.3 2.1 2:01.56 oraagent.bin
9516 oracle 20 0 1374m 25m 17m S 0.3 0.9 0:22.80 asm_diag_+asm2
9523 oracle 20 0 1385m 40m 25m S 0.3 1.4 1:01.14 asm_lmon_+asm2
9526 oracle 20 0 1383m 30m 17m S 0.3 1.1 0:38.67 asm_lmd0_+asm2
9532 oracle 20 0 1370m 21m 17m S 0.3 0.8 0:13.08 asm_lmhb_+asm2
Leaf Node(s):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4085 oracle 20 0 602m 44m 14m S 5.6 1.6 0:17.24 gipcd.bin
4179 root RT 0 724m 100m 71m S 4.6 3.5 0:08.74 osysmond.bin
3940 root 20 0 1191m 60m 29m S 2.6 2.1 0:14.51 ohasd.bin
4161 root 20 0 725m 28m 13m S 2.0 1.0 0:04.29 octssd.bin
4188 root 20 0 1202m 55m 27m S 2.0 1.9 0:07.27 crsd.bin
4051 oracle 20 0 667m 31m 15m S 1.7 1.1 0:05.16 evmd.bin
4128 oracle RT 0 987m 127m 79m S 1.3 4.5 0:05.97 ocssdrim.bin
4470 oracle 20 0 15080 1100 812 R 0.7 0.0 0:00.08 top
4114 root RT 0 852m 104m 71m S 0.3 3.7 0:01.12 cssdagent
As part of the new Flex ASM feature, ASM instances do not run on the leaf node(s).
[oracle@bsfrac01 bin]$ ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5 0/ ONLINE ONLINE bsfrac01
ora.GRID1.dg ora....up.type 0/5 0/ ONLINE ONLINE bsfrac01
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE bsfrac01
ora....AF.lsnr ora....er.type 0/5 0/ OFFLINE OFFLINE
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfrac02
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfrac01
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE bsfrac04
ora.MGMTLSNR ora....nr.type 0/0 0/0 OFFLINE OFFLINE
ora.asm ora.asm.type 0/5 0/0 ONLINE ONLINE bsfrac01
ora....01.lsnr application 0/5 0/0 ONLINE ONLINE bsfrac01
ora....c01.ons application 0/3 0/0 ONLINE ONLINE bsfrac01
ora....c01.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfrac01
ora....02.lsnr application 0/5 0/0 ONLINE ONLINE bsfrac02
ora....c02.ons application 0/3 0/0 ONLINE ONLINE bsfrac02
ora....c02.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfrac02
ora....03.lsnr application 0/5 0/0 ONLINE ONLINE bsfrac03
ora....c03.ons application 0/3 0/0 ONLINE ONLINE bsfrac03
ora....c03.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfrac03
ora....04.lsnr application 0/5 0/0 ONLINE ONLINE bsfrac04
ora....c04.ons application 0/3 0/0 ONLINE ONLINE bsfrac04
ora....c04.vip ora....t1.type 0/0 0/0 ONLINE ONLINE bsfrac04
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE bsfrac01
ora.gns ora.gns.type 0/5 0/0 ONLINE ONLINE bsfrac01
ora.gns.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfrac01
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE bsfrac01
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE bsfrac01
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE bsfrac01
ora.proxy_advm ora....vm.type 0/5 0/ ONLINE ONLINE bsfrac01
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfrac02
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfrac01
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE bsfrac04
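A quicker overall health summary is also available from crsctl. A sketch, guarded to run only where a grid installation exists (the grid home path follows this chapter's directory layout):

```shell
# Assumed grid home, per this chapter's directory layout.
GRID_HOME=/u01/app/12.1.0/grid
if [ -x "$GRID_HOME/bin/crsctl" ]; then
  # Reports CRS, CSS, and EVM status on every hub node at once.
  "$GRID_HOME/bin/crsctl" check cluster -all
else
  echo "crsctl not found under $GRID_HOME"
fi
```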
As economies the world over continue to shrink, organizations and entities are looking for new and innovative ways to achieve the next dimension of corporate efficiency. Virtualization is one of the key elements of modern cloud computing, providing the gateway to a cost-effective, agile, and elastic IT environment.
Based on the mainstream open-source Xen hypervisor, OVM along with Oracle RAC and EM12c can be used to formulate, build, and set up end-to-end industry-grade, state-of-the-art virtualized database clouds. If you’re not already on it, now is the perfect time to hop on the virtualization and cloud computing bandwagon.
The methods outlined in this chapter and the next are a perfect avenue to do so, providing a step-by-step guide for setting up and installing your own RAC 12c environment, whether at home or in a corporate setting. Continue on to the next chapter to finish the RAC 12c database cloud journey that we started in this one.