Chapter 7. Grid Infrastructure Installation

In Oracle 10g and 11.1, installing Oracle RAC can be a two- or three-stage process. The recommended configuration is to install the Oracle Clusterware, ASM, and RDBMS software in three separate Oracle homes. Oracle Clusterware must be in a separate Oracle home for architectural reasons; Oracle recommends that ASM and RDBMS also be separated to provide more flexible options during upgrades. For many sites, the ASM and RDBMS software is identical, and it is still common for users not requiring this level of flexibility to install ASM and RDBMS in the same Oracle home.

Prior to Oracle 11.2, the order of installation was therefore Oracle Clusterware, ASM (if required), and finally, the RDBMS software. In Oracle 11.2, Oracle Clusterware and ASM have been merged together into a single Oracle home known as the Grid Infrastructure home. The RDBMS software is still installed into a separate Oracle home. In Oracle 11.2, the order of installation is therefore Oracle Grid Infrastructure (including Oracle Clusterware and ASM), followed by the RDBMS software.

It is no longer possible to install Oracle Clusterware and ASM separately. This change has particular relevance in single-instance environments, which now require a special single-node Grid Infrastructure installation that includes a cut-down version of Oracle Clusterware in addition to ASM.

Getting Ready for Installation

In this section, we discuss the basic installation. Your first step will be to obtain the software distribution. Next, you will configure your X environment. Finally, you will determine whether to configure the grid software manually or to let the installer configure the software automatically.

Obtain Software Distribution

The first step is to obtain the Grid Infrastructure software distribution. The software can be downloaded from the Oracle Technology Network at www.oracle.com/technetwork/index.html. Alternatively, it may be available on DVD-ROM. In Oracle 11.2, software is delivered as a zip file that can be unzipped into a staging area.

For example, this snippet creates a suitable staging area:

[oracle@london1]$ mkdir /home/oracle/stage
[oracle@london1]$ cd /home/oracle/stage

Next, download the software into the staging area and unzip the download file:

[oracle@london1]$ unzip linux_11gR2_grid.zip

This unzip process will create a directory called grid below /home/oracle/stage that contains the installation files.

Configure X Environment

Both the Grid Infrastructure and the RDBMS software must be installed using the Oracle Universal Installer (OUI). This tool runs in both interactive and silent modes. In interactive mode, the OUI is a GUI tool that requires an X environment. Due to the complexity of the installation process, we recommend that you initially perform installations in interactive mode because this mode allows many errors to be corrected without restarting the installation process. The silent mode is intended for installations on large numbers of nodes and for sites where you wish to ensure a high level of standardization. The installer also allows a response file to be generated at the end of the interview process. This file can optionally be modified and used as the basis for subsequent installations.
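
For example, a subsequent silent installation can be driven entirely from a saved response file. The following command is a minimal sketch; the response file name and location are illustrative:

[oracle@london1]$ /home/oracle/stage/grid/runInstaller -silent \
      -responseFile /home/oracle/grid_install.rsp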

Checking Prerequisites

In Oracle 11.2, the OUI has been completely rewritten and is more user-friendly than in previous versions. Checking of prerequisites has been improved, and the installer offers a facility where it can generate fixup scripts for a limited selection of configuration errors and omissions. These fixup scripts should be executed manually by the root user before the installation can proceed.

The Cluster Verification Utility (CVU) has been fully integrated into the OUI. It forms the basis of the prerequisite checks, and it is also executed as the final step of the Grid Infrastructure installation. Note that Oracle still recommends that the CVU be executed in standalone mode prior to installation. It also recommends that you run CVU before and after operations that add or delete nodes.
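
For example, the following command runs the CVU from the staging area to check the cluster prerequisites before installation; the node list shown matches the four-node cluster used later in this chapter:

[oracle@london1]$ /home/oracle/stage/grid/runcluvfy.sh stage -pre crsinst \
      -n london1,london2,london3,london4 -verbose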

As in previous releases, the OUI interactive installation installs the software onto the first node and then relinks the executables, if necessary. The linked executables are then copied to the remaining nodes in the cluster. To use the OUI in interactive mode, an X environment must be available. You can either run an X environment on the console of the installation node, or you can use VNC.

The simplest way to do this is to perform the installation in the console window of the first node. The most popular Linux desktop environments are GNOME and KDE. Either can be used for OUI installations, as well as for GUI-based configuration assistants such as DBCA and ASMCA.

Alternatively, a VNC server can be configured to run on the installation node that allows a desktop environment to be presented on a separate VNC client. Using VNC is a better solution if you are in a different location than the database servers.

Starting an X environment in a console window

X is started automatically in Linux run level 5. However, in Oracle 11.2 it is recommended that Linux servers use run level 3, so X will not be available. To start X, log in to a console session as the root user and enter this snippet:

[root@london1]# startx

This will start a GUI desktop session on the same terminal. By default, Linux provides a series of virtual consoles, and by convention, the X environment runs on the seventh console. You can switch to the X environment by pressing CTRL+ALT+F7; you can switch back to the text consoles using CTRL+ALT+F1, CTRL+ALT+F2, and so on. Sometimes, this can be useful for problem diagnosis. However, once the desktop session is running, you can also start a terminal window from within the desktop.

To allow any user to run an X session, first start a terminal session as the root user and enter this line:

[root@london1]# xhost +

Next, change to the oracle user:

[root@london1]# su - oracle
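
If the installer subsequently fails to open a window, you may need to set the DISPLAY environment variable explicitly for the oracle user; the display number shown here assumes the default local X server:

[oracle@london1]$ export DISPLAY=:0.0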

Starting an X environment using VNC

An alternative option is to use VNC. VNC is a free client/server product in which a VNC server session runs on the server and a VNC Viewer runs on the client.

On the server, VNC is supplied with the operating system in Red Hat and Oracle Enterprise Linux distributions. For OEL5R2, the VNC server RPM is vnc-server-4.1.2-9.el5. In Linux, VNC is implemented as a service. You can check whether it is currently installed using this line:

[root@london1]# rpm -q vnc-server

If the package is not installed, then install it from the Linux distribution, as in this example:

[root@london1]# rpm -ivh vnc-server-4.1.2-9.el5.rpm

The following line checks whether the vncserver service is currently running:

[root@london1]# service vncserver status
Xvnc is stopped

If the vncserver service is not currently running, then you can start it with this line:

[root@london1]# service vncserver start

If you receive the following message, then you need to configure VNC on the server:

Starting VNC server: no displays configured                [ OK ]

To configure VNC on the server, log in as root and add the following lines to /etc/sysconfig/vncservers:

VNCSERVERS="1:oracle"
VNCSERVERARGS[1]="-geometry 1024x768"

Set the geometry argument to reflect the display size available on the client. Note that the server number must be specified when connecting from the client. The following example indicates that you are connecting to display 1 on the host server14:

server14:1
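
From a Linux client with the VNC viewer installed, the same connection can also be made from the command line; the host and display number match the preceding example:

[user@client]$ vncviewer server14:1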

Now log in as the oracle user and run vncpasswd:

[oracle@london1]$ vncpasswd
Password: <enter password>
Verify: <enter password again>

And now log in as root:

[root@london1]# service vncserver start
Starting VNC server: 1:oracle
xauth:  creating new authority file /home/oracle/.Xauthority

New 'london1.example.com:1 (oracle)' desktop is london1.example.com:1
Creating default startup script /home/oracle/.vnc/xstartup
Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/london1.example.com:1.log

Next, you need to modify /home/oracle/.vnc/xstartup by uncommenting the following two lines:

unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc

Now log in as root again and restart the vncserver service:

[root@london1]# service vncserver restart

The vncserver service can be permanently enabled using this line:

[root@london1]# chkconfig vncserver on

The preceding command will start vncserver at run levels 2, 3, 4, and 5.
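
You can verify the run levels at which the service is enabled using chkconfig; output similar to the following is expected:

[root@london1]# chkconfig --list vncserver
vncserver       0:off   1:off   2:on    3:on    4:on    5:on    6:off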

For the client, VNC can be downloaded from a number of sources, such as www.realvnc.com. At the time of writing, the current version of VNC was 4.1.3. Versions are available for Windows, Linux, and several other UNIX-based operating systems.

For Windows-based clients, VNC can be downloaded as either a ZIP file or an executable file. In the following example, the zip file was called vnc-4_1_3-x86_win32.zip and contained a single executable file called vnc-4_1_3-x86_win32.exe. This executable runs the VNC Setup Wizard, which allows you to install the VNC server, the viewer, or both.

Once installed, the VNC Viewer can be launched from the Windows Start menu:

Start > Programs > RealVNC > VNC Viewer 4 > Run VNC Viewer

Enter the server name when prompted (e.g., london1:1), and then enter the VNC password for the oracle user.

Note

Other connection options exist, such as using Hummingbird Exceed or Xming. However, in our opinion, the two connection methods just described provide the simplest ways to achieve the desired behavior.

Determining Configuration Type

In Oracle 11.2, you can either perform a manual configuration, in which case you are responsible for assigning names and IP addresses to all components in your cluster, or you can perform an automatic configuration, in which case Oracle assigns names and IP addresses to a limited number of components in the cluster. For the automatic configuration, Oracle can assign names and IP addresses for the private network (interconnect), the VIP addresses, and the SCAN VIP addresses. Oracle often refers to the automatic configuration as a Grid Naming Service (GNS) configuration. If you choose to perform an automatic configuration, we recommend that you still configure the private network manually, so that you can test the private network prior to installing the Grid Infrastructure. The VIP and SCAN VIP addresses use the public network, which must be configured manually for individual nodes prior to installation.

For both manual and automatic configurations, we recommend that you configure a DNS server in your environment. It is still theoretically possible to install Grid Infrastructure without a DNS server. However, you will receive some error messages from the Cluster Verification Utility at the end of the installation session if you do this. If you do not already have a DNS server in your environment, then it is relatively simple to create one locally. This may be necessary if you are installing Grid Infrastructure in a test environment.

For the automatic configuration, a DHCP server must also be available in your environment. Again, if a DHCP server is not available, you can configure one locally for testing purposes. Chapter 6 covers how to configure both DNS and DHCP in detail.

Also in Oracle 11.2, you can perform either a Typical or an Advanced Grid Infrastructure installation. If you have configured your environment manually, then you can use either the Typical or the Advanced installation; if you wish to use the automatic configuration, you must select the Advanced installation.

As you would expect, the Typical Installation makes a few assumptions about your installation decisions. The Advanced Installation allows you to override some default options and only requires a couple of additional steps. In particular, the Advanced Installation option allows you to specify a name for your cluster; as we will see later, the Typical Installation option derives the name of the cluster from the cluster SCAN name, and that name may not always be appropriate. It is also easier to troubleshoot installation errors in the Advanced Installation because you will have entered more of the configuration options yourself. Therefore, we generally recommend that you perform an Advanced Installation, regardless of whether you wish to use manual or automatic configuration.

In the following sections, we will describe all three installation types in the following order:

  • Advanced Installation - manual configuration (without GNS)

  • Advanced Installation - automatic configuration (with GNS)

  • Typical Installation - manual configuration (without GNS)

We will fully describe the Advanced Installation with manual configuration. For the remaining two installation types, we will only discuss the differences between these and the Advanced / manual installation.

Advanced Installation - Manual Configuration

In this section we will discuss how to implement an advanced installation using the manual configuration option. We will assume that DNS is installed and configured, as detailed in Chapter 6, but we will also include the actual network configuration used for the installation of a four-node cluster.

Network Configuration

The first step is to configure the network infrastructure. In this example, values are configured in DNS and /etc/hosts. Tables 7-1 through 7-6 show the various values that we used. The values are based on the example configurations described in Chapter 6.

Table 7.1. DNS Settings

Domain Name       example.com
Server Name       dns1.example.com
Server Address    172.17.1.1

Table 7.2. DHCP Settings

Server address    Not required
Address range     Not required

Table 7.3. DNS Subdomain Settings

Sub domain        Not required
Host name         Not required
VIP address       Not required

Table 7.4. SCAN Listeners

SCAN Name         cluster1-scan.example.com
SCAN Addresses    172.17.1.205, 172.17.1.206, 172.17.1.207

Table 7.5. Public Network Settings

Network               172.17.0.0
Gateway               172.17.0.254
Broadcast address     172.17.255.255
Subnet mask           255.255.0.0
Node names            london1, london2, london3, london4
Public IP addresses   172.17.1.101, 172.17.1.102, 172.17.1.103, 172.17.1.104
VIP names             london1-vip london2-vip london3-vip london4-vip
VIP addresses         172.17.1.201, 172.17.1.202, 172.17.1.203, 172.17.1.204

Table 7.6. Private Network Settings

Network           192.168.1.0
Subnet mask       255.255.255.0
Names             london1-priv london2-priv london3-priv london4-priv
IP addresses      192.168.1.1, 192.168.1.2, 192.168.1.3, 192.168.1.4

DNS Configuration

Your next task is to verify that the bind package has been installed and is running. Chapter 6 shows you how to do that. In this example, /etc/named.conf has exactly the same format that is shown in Chapter 6.

For example, forward lookups are configured in master.example.com, as shown here:

$TTL 86400
   @             IN SOA dns1.example.com. root.localhost. (
                          2010063000          ; serial
                          28800               ; refresh
                          14400               ; retry
                          3600000             ; expiry
                          86400 )             ; minimum
   @             IN NS  dns1.example.com.
   localhost     IN A   127.0.0.1
   dns1          IN A   172.17.1.1
   london1       IN A   172.17.1.101
   london2       IN A   172.17.1.102
   london3       IN A   172.17.1.103
   london4       IN A   172.17.1.104
   london1-vip   IN A   172.17.1.201
   london2-vip   IN A   172.17.1.202
   london3-vip   IN A   172.17.1.203
   london4-vip   IN A   172.17.1.204
   cluster1-scan IN A   172.17.1.205
                 IN A   172.17.1.206
                 IN A   172.17.1.207

And reverse lookups for example.com are configured in 172.17.1.rev:

$TTL 86400
   @             IN SOA dns1.example.com. root.localhost. (
                          2010063000          ; serial
                          28800               ; refresh
                          14400               ; retry
                          3600000             ; expiry
                          86400 )             ; minimum
   @       IN NS  dns1.example.com.
   1       IN PTR dns1.example.com.
   101     IN PTR london1.example.com.
   102     IN PTR london2.example.com.
   103     IN PTR london3.example.com.
   104     IN PTR london4.example.com.
   201     IN PTR london1-vip.example.com.
   202     IN PTR london2-vip.example.com.
   203     IN PTR london3-vip.example.com.
   204     IN PTR london4-vip.example.com.

Forward and reverse lookups for the local domain must also be configured; again, see Chapter 6 for more information on how to do this.

On each node in the cluster, /etc/resolv.conf was configured as follows:

search example.com
nameserver 172.17.1.1
options attempts:2
options timeout:1

In the next example, we configured the private network addresses in /etc/hosts on each node:

192.168.1.1        london1-priv.example.com london1-priv
192.168.1.2        london2-priv.example.com london2-priv
192.168.1.3        london3-priv.example.com london3-priv
192.168.1.4        london4-priv.example.com london4-priv

Note that, in the preceding example, it is not strictly necessary to include a fully qualified domain name for the private network addresses. However, we show both the fully qualified and unqualified names for completeness and compatibility with earlier releases.
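
Before starting the installer, it is worth verifying that the name resolution just configured behaves as expected. For example, the SCAN name should resolve to all three addresses, and reverse lookups should return the corresponding host names; the output shown here is abbreviated:

[oracle@london1]$ nslookup cluster1-scan.example.com
Name:    cluster1-scan.example.com
Address: 172.17.1.205
Name:    cluster1-scan.example.com
Address: 172.17.1.206
Name:    cluster1-scan.example.com
Address: 172.17.1.207

[oracle@london1]$ nslookup 172.17.1.101
101.1.17.172.in-addr.arpa       name = london1.example.com.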

Choosing an Installation Option

Now run the installer as the user who will own the Grid Infrastructure software and look at your installation options. If you are installing the Grid Infrastructure under a user called grid, then you should run the installer as the grid user. The following example starts OUI as the oracle user:

[oracle@london1]$ /home/oracle/stage/grid/runInstaller

The installer is written in Java, and it takes a couple of minutes to load. When it does load, you'll see the page shown in Figure 7-1.

Figure 7.1. Choosing an installation option

The Installation Option page provides the following options:

  • Install and Configure Grid Infrastructure for a Cluster: If you are building a multinode cluster, we recommend that you select this option, which installs the Grid Infrastructure software (Clusterware and ASM) and configures the Clusterware files (OCR and Voting disks). If you choose to locate the Clusterware files in ASM, then an ASM disk group will also be created during installation. This option should also be used if you plan to deploy Oracle RAC One Node.

  • Install and Configure Grid Infrastructure for a Standalone Server: This option should be used if you want to build a single-node cluster. The Grid Infrastructure installation will include a cut-down version of Oracle Clusterware and ASM. This option is not appropriate for Oracle RAC One Node, which requires a minimum of two nodes in the cluster.

  • Upgrade Grid Infrastructure: This option should be selected if you have an older version of Oracle Clusterware installed on the cluster. The following versions of Oracle Clusterware can be upgraded directly:

    • Oracle 10g Release 1 - 10.1.0.3 or above

    • Oracle 10g Release 2 - 10.2.0.3 or above

    • Oracle 11g Release 1 - 11.1.0.6 or above

    If you are currently running Oracle Clusterware, and it is not one of the aforementioned versions, then you will need to upgrade to the latest patch set for your release before installing Oracle 11.2.

  • Install Grid Infrastructure Software Only: In Oracle 11.2, it is possible to perform a software-only installation of Grid Infrastructure and to run the root scripts that initialize the cluster environment at a later time. This option will be particularly useful for sites where the DBA does not have sufficient privileges, and the root scripts consequently have to be run by a system administrator. In previous releases, it was necessary to run the root scripts before the OUI session could complete, which meant some coordination was required between the DBA and the system administrator. In this release, however, the OUI session can complete, and the administrator can subsequently be requested to run the root scripts.

Once you choose the desired option, press Next to continue to the next page.

Selecting an Advanced or Typical Installation Type

Now you can choose your installation type. Figure 7-2 shows the two choices that are available: Typical and Advanced. Both the Typical Installation and the Advanced Installation using manual configuration require the same amount of preparation prior to installation.

Figure 7.2. Choosing an installation type

In all cases, we recommend that a DNS server be available either within the enterprise or within the local network. Advanced Installation using the automatic configuration also requires that a DHCP server be similarly available.

In the next example, we will perform an Advanced Installation using manual configuration (without GNS). Be sure to select that option, and then press Next to continue to the language selection page.

Selecting a Language

The next step in the install process is to choose your language. Oracle supports a wide range of languages, as shown in Figure 7-3.

Figure 7.3. Grid infrastructure language choices

On the Language Selection page, (American) English will be preselected. If you wish error messages to be reported in other languages, then add these languages using the arrow buttons. We recommend that you do not attempt to remove American English.

Next, select any additional languages you want to use and press Next to continue to the Grid Plug and Play Information page.

Configuring the Grid Plug and Play

The Grid Plug and Play (GPnP) Information page requires some mandatory information about the cluster (see Figure 7-4). Take care when choosing a Cluster Name because it is very difficult to change this name later.

Figure 7.4. Configuring Plug and Play settings

The Single Client Access Name (SCAN) is a new feature in Oracle 11.2. The SCAN effectively provides a network alias for the cluster. It replaces individual node names in the connection string and allows clients to connect to the cluster regardless of which nodes are currently available. The SCAN is accessed via a SCAN VIP that typically runs on three nodes in the cluster. If fewer than three nodes are available, then multiple SCAN VIPs may be running on the same node. A dedicated SCAN listener process runs on each node that has a SCAN VIP. The SCAN listener is responsible for load balancing across the cluster; the connection is forwarded to a local listener process running on one of the nodes currently in the cluster.

You will need to assign SCAN addresses in DNS prior to installing the Grid Infrastructure. Oracle recommends that three SCAN addresses be configured using addresses from the public network range.

In Figure 7-4, the SCAN name is cluster1-scan.example.com. The SCAN name generally takes this format: <cluster-name>-scan.<domain>. However, the precise format of the SCAN address appears to be an evolving naming convention, rather than a mandatory requirement.
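
Once the cluster is running, clients reference the SCAN instead of a list of node names. As a purely illustrative example, a tnsnames.ora entry for a hypothetical service called PROD might look like this:

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster1-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PROD))
  )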

The Grid Naming Service (GNS) is not required for the manual installation. Therefore, the GNS check box should not be checked.

Press Next to continue to the Cluster Node Information page.

Configuring the Cluster Node Information Page

On the Cluster Node Information page (Figure 7-5), enter the hostname and virtual IP name of each node in the cluster. You can either enter names individually using the Add button or you can specify a cluster configuration file that contains the names for all nodes.

Figure 7.5. Configuring cluster node information

We recommend creating a cluster configuration file if there are more than two nodes. The format of the file looks like this:

<cluster_name>
<node_name> <vip_name>
[<node_name> <vip_name>]

A configured file might look like this:

cluster1
london1.example.com london1-vip.example.com
london2.example.com london2-vip.example.com
london3.example.com london3-vip.example.com
london4.example.com london4-vip.example.com

We named this file cluster1.ccf and stored it in /home/oracle, so that it will not be deleted in the event of a cleanup operation following a failed Grid Infrastructure installation.

The Cluster Node Information page also allows you to set up and test Secure Shell (SSH) connectivity between all the nodes in the cluster. If you do not need to customize your SSH configuration, then it is much simpler and quicker to allow Oracle to perform the standard SSH configuration at this point, especially if you have more than two nodes. We recommend that you allow the installer to configure SSH automatically at this stage, regardless of the installation type. Whether you configure SSH manually or automatically, the installer will test the SSH configuration at this stage to ensure there are no errors.
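
If you do need to configure SSH manually, the outline is to generate a key pair for the software owner on each node and append every public key to ~/.ssh/authorized_keys on every node. The following is a minimal sketch for a single node pair, assuming the ssh-copy-id utility is available; the steps must be repeated for every combination of nodes, and each connection should then be tested:

[oracle@london1]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
[oracle@london1]$ ssh-copy-id oracle@london2
[oracle@london1]$ ssh london2 date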

Press Next to continue to the Network Interface Usage page.

Configuring the Network Interface Usage Page

The next page you need to configure is the Network Interface Usage page (see Figure 7-6). You use this page to specify which interfaces should be used for the public and private networks.

Figure 7.6. Configuring network interface usage

In Oracle 11.2, the installer makes some reasonable guesses at the proper settings to use. In the example shown in Figure 7-6, the public network will use the eth0 interface and the 172.17.1.0 subnet, and the private network will use the eth1 interface and the 192.168.1.0 subnet.

You can also specify which networks should not be used directly by Clusterware and RAC. For example, these might include storage, backup, or management networks. In the preceding example, the eth2 interface that uses the 192.168.2.0 subnet will not be used by Oracle Clusterware.
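
You can cross-check the subnets detected by the installer against the operating system configuration. For example, the following commands display the address assigned to each interface; the values shown match the network settings in Tables 7-5 and 7-6:

[root@london1]# ip addr show eth0 | grep 'inet '
    inet 172.17.1.101/16 brd 172.17.255.255 scope global eth0
[root@london1]# ip addr show eth1 | grep 'inet '
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth1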

Press Next to continue to the Storage Option Information page.

Configuring the Storage Option Information Page

Your next step is to configure the Storage Option Information page (see Figure 7-7). In Oracle 11.2, you can place your Clusterware files (OCR and Voting disk) either in ASM storage or on a shared file system. In this version, you cannot create new Clusterware files on raw or block devices. However, you can upgrade existing Clusterware files on raw or block devices.

Figure 7.7. Specifying a storage option

If you specify ASM Storage on the Storage Option Information page shown in Figure 7-7, the LUNs must be provisioned and accessible with the correct permissions. ASM instances and ASM disk groups will be created later in the Grid Infrastructure installation process. If you specify Shared File System, then a suitable cluster file system, such as OCFS2, or a supported version of NFS must already have been installed.

Unless you have NAS-based storage, we strongly recommend using ASM to provide shared storage for RAC clusters. We also recommend placing the OCR and Voting disk in the ASM storage for newly created clusters.
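
If you provisioned the ASM disks using ASMLib, as described in Chapter 6, you can confirm that the candidate disks are visible on each node before continuing; the disk name shown matches the disk used later in this example:

[root@london1]# /usr/sbin/oracleasm listdisks
VOL1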

Select the appropriate option and press Next to continue. If you have chosen ASM, then the Create ASM Disk Group page will be displayed.

Creating an ASM Disk Group

On the ASM Disk Group page (see Figure 7-8), you can create an ASM disk group that will include the OCR and Voting disk. Additional ASM disk groups can be created later in the installation process.

Figure 7.8. Creating an ASM Disk Group

If no candidate disks are displayed, then click Change Discovery Path... and modify the path.

Next, specify an ASM Disk Group name and the level of redundancy. If you select External Redundancy, you will need to specify at least one disk; if you select Normal Redundancy, at least three disks; and if you select High Redundancy, at least five disks. The reason for the differing disk requirements is that with Normal Redundancy, three Voting disks are created in the headers of the ASM disks, so three disks are required; with High Redundancy, five Voting disks are created, so five disks are required.

When you have specified the details for the ASM Disk Group, press Next to continue to the ASM Password page.

Specifying an ASM Password

The ASM Password page lets you specify passwords for the ASM SYS user and the ASMSNMP user (see Figure 7-9). The SYS user is used to administer ASM using the SYSASM privilege that was introduced in Oracle 11.1.

Figure 7.9. Specifying an ASM password

The ASMSNMP user is new to Oracle 11.2, and it is granted the SYSDBA privilege in order to monitor ASM instances. You can optionally use the same password for both accounts; however, we recommend that you use a different password for each account.

Press Next to continue to the Failure Isolation Support page.

Specifying a Username and Password for IPMI

The Failure Isolation Support page lets you specify a username and password for the Intelligent Platform Management Interface (IPMI), as shown in Figure 7-10.

Figure 7.10. Failure Isolation Support

If you wish to use IPMI, you must ensure that you have appropriate hardware. You must also ensure that IPMI drivers are installed and enabled prior to starting the Grid Infrastructure installation process, as described in Chapter 6.

Press Next to continue to the Privileged Operating System Groups page.

Configuring Privileged Operating System Groups

The Privileged Operating System Groups page allows you to associate certain key Oracle groups with groups that you've defined at the operating-system level (see Figure 7-11).

Figure 7.11. Creating associations for Operating System Groups

On the Privileged Operating System Groups page, you can specify operating system groups for the following Oracle groups:

  • ASM Database Administrator (OSDBA)

  • ASM Instance Operator (OSOPER)

  • ASM Instance Administrator (OSASM)

Although different operating system groups can be specified for each Oracle group, the default operating system group (oinstall) is sufficient for most installations.

Press Next to continue. If you accepted the default values, you will receive the following warning:

INS-41813 OSDBA, OSOPER and OSASM are the same OS group.

Click Yes to ignore the warning and continue to the Installation Location page.

Setting the Installation Location

The Installation Location page lets you specify values for the Oracle base location ($ORACLE_BASE) and the Oracle Grid Infrastructure home location (see Figure 7-12). In Oracle 10g and above, Oracle recommends that you not locate the Oracle Grid Infrastructure home (Clusterware home) in a directory below the $ORACLE_BASE directory. This is mainly recommended for security reasons because the Grid Infrastructure home contains directories where scripts can be placed that are automatically executed with root privileges.

Figure 7.12. Specifying the installation location

In Oracle 11.2 and above, an out-of-place upgrade is used for the Grid Infrastructure. This means that a new Grid Infrastructure home is created for the upgrade software. Subsequently, Clusterware and ASM are switched over to use the new home. Therefore, in Oracle 11.2 and later, we recommend that you include the version number with the home directory path, so that two Grid Infrastructure homes can exist concurrently on the same node during the upgrade.
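
For reference, the directory structure used throughout this chapter can be pre-created as follows; the paths match the Grid Infrastructure home and Oracle base locations shown in the remainder of this example:

[root@london1]# mkdir -p /u01/app/11.2.0/grid
[root@london1]# mkdir -p /u01/app/oracle
[root@london1]# chown -R oracle:oinstall /u01/app
[root@london1]# chmod -R 775 /u01/app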

Press Next to continue to the Inventory page.

Specify the Central Inventory's Location

If this is the first Oracle installation on the cluster, the Create Inventory page will be displayed (see Figure 7-13). This page lets you specify the location of the central inventory. A copy of the central inventory will be created and maintained in the same location on each node in the cluster.

Figure 7.13. Creating the Inventory location

You can also specify the operating system group that will own the inventory, which defaults to oinstall. Members of this group can write to the inventory.

Press Next to continue to the Prerequisite Checks page.

Performing Prerequisite Checks

In Oracle 11.2, the Prerequisite Checks page (see Figure 7-14) has been enhanced to perform many of the checks previously performed only by the Cluster Verification Utility (CLUVFY).

If you have followed the Linux operating system installation and configuration steps covered in Chapter 6, then there should be no reported errors on this screen. However, for Figure 7-14, we deliberately did not configure kernel parameters on any node in the cluster. We did this to illustrate a new OUI feature that identifies and repairs operating system configuration errors before proceeding with the Grid Infrastructure software installation. If you are satisfied that you have installed and configured Linux correctly and that you have no errors to repair, then you may proceed directly to the Summary page.

Figure 7.14. Performing Prerequisite Checks

If any prerequisite checks fail, then error messages will be displayed. In some circumstances, the page will generate scripts to fix the errors.

Identifying Typical Errors

Typical configuration errors identified by the prerequisite checks at this stage include the following:

  • Insufficient swap space

  • Incorrect run level

  • Inadequate shell limits

  • Inadequate kernel parameters

Getting More Detail

On the Prerequisite Checks page, the installer displays a list of errors. You can click on each error to get additional details. For example, if you click the failure message for the rmem_default kernel parameter, you'll see the additional information shown in Figure 7-15. Notice particularly that this page shows the cause of the error and recommended actions.

Figure 7.15. Viewing error messages, including causes and recommended actions

The message in Figure 7-15 shows that the rmem_default parameter was still at the default value of 109568 on each node in the cluster. The expected value is 262144 on each node in the cluster. The message is repeated for each node on which errors exist.

Note

It is possible to select an error and press Fix and Check Again to resolve a number of errors, including kernel parameter errors.

Fixup Scripts

Depending on the nature of an error, the installer may generate a script to fix it. If you get such a script, you should run it as the root user before returning to the Prerequisite Checks page. If you cannot log in as the root user, then you should work with someone who can.

Fixup scripts can be generated for kernel parameter errors and user limits errors. In this release, fixup scripts cannot be generated for errors such as missing packages (RPMs) or insufficient swap space.

Figure 7-16 shows an example where the installer is telling you that you have an automatically generated script to run. Notice the instructions for running the script. Also notice that the installer tells you the nodes on which the script should be run.

Figure 7.16. Error resolution scripts

Figure 7-16 also mentions a directory named /tmp/CVU_11.2.0.1.0_oracle. Fixup scripts are generated in directories created with this naming pattern:

/tmp/CVU_<release>_oracle

The installer will create that directory on each node in the cluster during the Grid Infrastructure installation process. Any fixup scripts will be written to that directory. Another script named runfixup.sh is created to run all the individual scripts.

Scripts generated include the following:

  • runfixup.sh: This shell script creates a log file (orarun.log) and executes orarun.sh.

  • orarun.sh: This shell script contains the functionality to perform all supported fixes.

  • fixup.enable: This file specifies which types of fix should be applied.

  • fixup.response: This file contains a set of parameters as name-value pairs.

Anatomy of Fixup Scripts

The orarun.sh script contains the functionality to perform all the supported fixes. It takes two parameters: the name of the response file (fixup.response) and the name of the enable file (fixup.enable).

Not all configuration errors can be fixed automatically in the first release. Currently, fixup scripts can be generated for the following kernel parameters:

  • SHMMAX: kernel.shmmax

  • SHMMNI: kernel.shmmni

  • SHMALL: kernel.shmall

  • SEMMSL: kernel.sem.semmsl

  • SEMMNS: kernel.sem.semmns

  • SEMOPN: kernel.sem.semopm

  • SEMMNI: kernel.sem.semmni

  • FILE_MAX_KERNEL: fs.file-max

  • IP_LOCAL_PORT_RANGE: net.ipv4.ip_local_port_range

  • RMEM_DEFAULT: net.core.rmem_default

  • RMEM_MAX: net.core.rmem_max

  • WMEM_DEFAULT: net.core.wmem_default

  • WMEM_MAX: net.core.wmem_max

  • AIO_MAX_NR: fs.aio-max-nr

Fixup scripts can also be generated for the following shell limits:

  • MAX_PROCESSES_HARDLIMIT: hard nproc

  • MAX_PROCESSES_SOFTLIMIT: soft nproc

  • FILE_OPEN_MAX_HARDLIMIT: hard nofile

  • FILE_OPEN_MAX_SOFTLIMIT: soft nofile

  • MAX_STACK_SOFTLIMIT: soft stack
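
You can, of course, also correct these settings yourself rather than relying on the generated scripts. The following is a minimal sketch for the kernel parameters: append the required values to /etc/sysctl.conf and reload them. The values shown are those reported by the fixup script later in this section:

[root@london1]# cat >> /etc/sysctl.conf <<EOF
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
EOF
[root@london1]# sysctl -p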

The fixup.enable file is generated by the OUI, and it lists the types of fixes that should be applied by orarun.sh, as in this example:

SET_KERNEL_PARAMETERS="true"

The fixup.response file is generated by the OUI, and it contains a set of parameters listed as name-value pairs, as in this example:

FILE_OPEN_MAX_HARDLIMIT="65536"
INSTALL_USER="oracle"
SYSCTL_LOC="/sbin/sysctl"
SEMOPM="100"
IP_LOCAL_PORT_RANGE="9000 65000"
RMEM_DEFAULT="262144"
RMEM_MAX="4194304"
WMEM_DEFAULT="262144"
WMEM_MAX="1048576"
AIO_MAX_NR="1048576"

Note that fixup.response includes more than just kernel parameter values; it also includes security and other parameters required by orarun.sh.

Addressing Failed Checks

We recommend that you address all failed prerequisite checks to ensure that any errors occurring during the installation process are genuine problems. However, you can choose to ignore some errors. For example, if your cluster does not need to synchronize time with external servers, you may decide not to configure NTP, in which case Clusterware will perform time synchronization internally using CTSS. In such cases, you can click Ignore All on the Prerequisite Checks page to acknowledge the warning, as well as to instruct the installer that you are prepared to continue.
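
Conversely, if you do wish to keep NTP, Oracle 11.2 requires that ntpd run with the slewing option (-x). You can check the configuration as follows; the OPTIONS line shows the desired state, so add -x and restart the service if it is missing:

[root@london1]# grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
[root@london1]# service ntpd restart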

The following example shows the output generated by running /tmp/CVU_11.2.0.1.0_oracle/runfixup.sh on the first node:

[root@london1 oracle]# /tmp/CVU_11.2.0.1.0_oracle/runfixup.sh
Response file being used is   :/tmp/CVU_11.2.0.1.0_oracle/fixup.response
Enable file being used is   :/tmp/CVU_11.2.0.1.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_oracle/orarun.log
Setting Kernel Parameters...
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

The installer will repeat the prerequisite checks, and if necessary, request that further errors be fixed. Although it is possible to ignore any warnings, we recommend that you attempt to resolve all errors identified during the prerequisite checks before continuing with the installation. Most errors can be resolved dynamically without exiting the installer.

When all prerequisite checks have succeeded, or you have indicated that you wish to ignore any failed checks, the Summary page will be displayed.

Reviewing the Summary Page

The Summary page lets you review the installation you are about to unleash upon your cluster (Figure 7-17). You'll see the choices you've made so far, as well as some of the ramifications of those choices.

Figure 7.17. The installation summary

The Summary page also lets you save a copy of the response file, if required. The response file can be used as input for silent installations.

Press Finish to start the installation, and the Setup page will be displayed.

Setup Page

The Setup page shows your progress through the following:

  • Install Grid Infrastructure

    • Prepare

    • Copy files

    • Link binaries

    • Setup files

    • Perform remote operations

  • Execute Root Scripts for Installing Grid Infrastructure

The installer will copy the software and supporting files from the staging area to the Oracle Grid Infrastructure home on the local cluster node. It will then replicate the Grid Infrastructure home from the local node to all remote nodes.

Reviewing Execute Configuration Scripts

When the remote copy is complete, the Execute Configuration scripts page will be displayed (see Figure 7-18 for an example of that page). This page will present you with a list of scripts that you will need to run while logged in as the root user. You may need to work with your system administrator to get the scripts run.

Figure 7.18. Reviewing post-install configuration scripts

Execution Order

The scripts should be executed in the order that the servers are listed. For example, Table 7-7 shows the correct order of execution given the data in Figure 7-18.

Table 7.7. Reviewing the Execution Order for the Post-Install Scripts Listed in Figure 7-18

Hostname    Script Name
london1     /u01/app/oraInventory/orainstRoot.sh
london2     /u01/app/oraInventory/orainstRoot.sh
london3     /u01/app/oraInventory/orainstRoot.sh
london4     /u01/app/oraInventory/orainstRoot.sh
london1     /u01/app/11.2.0/grid/root.sh
london2     /u01/app/11.2.0/grid/root.sh
london3     /u01/app/11.2.0/grid/root.sh
london4     /u01/app/11.2.0/grid/root.sh

It is particularly important that the root.sh scripts be executed in the specified order because the final script (london4, in this case) performs additional configuration after the OCR has been initialized on all nodes in the cluster.

Running the orainstRoot.sh Script

The orainstRoot.sh script initializes the Oracle inventory on the local node. It performs the following actions:

  • Creates the directory (e.g., /u01/app/oraInventory)

  • Creates the subdirectories

  • Creates /etc/oraInst.loc, which contains the location of the Oracle inventory

  • Sets the owner and group permissions for the inventory files

A sample orainstRoot.sh script looks like this:

[root@london1 oracle]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Executing the root.sh Script

The root.sh script configures the Clusterware daemons on each node.

On the first node, root.sh initializes the OCR. On the remaining nodes, root.sh adds the new node to the OCR. On the final node, root.sh also configures the VIP addresses for all nodes in the OCR. It is essential that root.sh be run successfully on the final node after it has been executed on all other nodes.

On the first node in the cluster, the output of root.sh looks like this:

[root@london1 ˜]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:   [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-10-23 08:31:15: Parsing the host name
2009-10-23 08:31:15: Checking for super user privileges
2009-10-23 08:31:15: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'london1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'london1'
CRS-2676: Start of 'ora.gipcd' on 'london1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'london1'
CRS-2676: Start of 'ora.gpnpd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'london1'
CRS-2676: Start of 'ora.cssdmonitor' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'london1'
CRS-2672: Attempting to start 'ora.diskmon' on 'london1'
CRS-2676: Start of 'ora.diskmon' on 'london1' succeeded
CRS-2676: Start of 'ora.cssd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'london1'
CRS-2676: Start of 'ora.ctssd' on 'london1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'london1'
CRS-2676: Start of 'ora.crsd' on 'london1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 9e0f1017c4814f08bf8d6dc1d35c6797.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name     Disk group
--  -----    -----------------                 ---------     ----------
 1. ONLINE   9e0f1017c4814f08bf8d6dc1d35c6797  (ORCL:VOL1)   [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'london1'
CRS-2677: Stop of 'ora.crsd' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'london1'
CRS-2677: Stop of 'ora.asm' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'london1'
CRS-2677: Stop of 'ora.ctssd' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'london1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'london1'
CRS-2677: Stop of 'ora.cssd' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'london1'
CRS-2677: Stop of 'ora.gpnpd' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'london1'
CRS-2677: Stop of 'ora.gipcd' on 'london1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'london1'
CRS-2677: Stop of 'ora.mdnsd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'london1'
CRS-2676: Start of 'ora.mdnsd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'london1'
CRS-2676: Start of 'ora.gipcd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'london1'
CRS-2676: Start of 'ora.gpnpd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'london1'
CRS-2676: Start of 'ora.cssdmonitor' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'london1'
CRS-2672: Attempting to start 'ora.diskmon' on 'london1'
CRS-2676: Start of 'ora.diskmon' on 'london1' succeeded
CRS-2676: Start of 'ora.cssd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'london1'
CRS-2676: Start of 'ora.ctssd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'london1'
CRS-2676: Start of 'ora.asm' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'london1'
CRS-2676: Start of 'ora.crsd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'london1'
CRS-2676: Start of 'ora.evmd' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'london1'
CRS-2676: Start of 'ora.asm' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'london1'
CRS-2676: Start of 'ora.DATA.dg' on 'london1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'london1'
CRS-2676: Start of 'ora.registry.acfs' on 'london1' succeeded

london1     2009/10/23 08:38:45
/u01/app/11.2.0/grid/cdata/london1/backup_20091023_083845.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

On the final node in the cluster, the output of root.sh should look something like this:

[root@london4 oraInventory]#  /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:   [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.    Overwrite it? (y/n)
[n]:


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-10-23 08:53:10: Parsing the host name
2009-10-23 08:53:10: Checking for super user privileges
2009-10-23 08:53:10: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node london1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'london4'
CRS-2676: Start of 'ora.mdnsd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'london4'
CRS-2676: Start of 'ora.gipcd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'london4'
CRS-2676: Start of 'ora.gpnpd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'london4'
CRS-2676: Start of 'ora.cssdmonitor' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'london4'
CRS-2672: Attempting to start 'ora.diskmon' on 'london4'
CRS-2676: Start of 'ora.diskmon' on 'london4' succeeded
CRS-2676: Start of 'ora.cssd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'london4'
CRS-2676: Start of 'ora.ctssd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'london4'
CRS-2676: Start of 'ora.drivers.acfs' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'london4'
CRS-2676: Start of 'ora.asm' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'london4'
CRS-2676: Start of 'ora.crsd' on 'london4' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'london4'
CRS-2676: Start of 'ora.evmd' on 'london4' succeeded

london4     2009/10/23 08:57:40
/u01/app/11.2.0/grid/cdata/london4/backup_20091023_085740.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory

When the root.sh script has executed successfully on all four nodes in the cluster, press OK in the Execute Configuration Scripts page. The installer will return to the Setup page.

Monitoring Configuration Assistants

The Configuration Assistant page allows you to monitor the progress of the individual Configuration Assistants that are fired off by the installation (see Figure 7-19).

Figure 7.19. Executing the Configuration Assistant

The example installation shown in Figure 7-19 will continue as follows:

Configure Oracle Grid Infrastructure for a Cluster, which includes the following assistants:

  • Oracle Net Configuration Assistant

  • Automatic Storage Management Configuration Assistant

  • Oracle Notification Server Configuration Assistant

  • Oracle Private Interconnect Configuration Assistant

  • Oracle Cluster Verification Utility.

For default installations, the Configuration Assistants should all execute silently. For non-default installations, the Configuration Assistants may display additional GUI windows to obtain further information.

When all of the Configuration Assistants are complete, the End of Installation page will be displayed. Press Close to terminate the installation.

Implementing an Advanced Installation for Automatic Configuration

In this section, we'll walk you through an example that shows how to implement an automatic configuration. We will describe an Advanced Installation of Grid Infrastructure using automatic configuration based on the Grid Naming Service (GNS). Again, we have assumed that DNS is installed and configured as detailed in Chapter 6; however, this example includes the actual network configuration used for the installation of a four-node cluster.

Configuring the Network

The first step is to configure the network infrastructure. Values are configured in DNS, DHCP, and /etc/hosts.

Tables 7-8 through 7-13 show the values that we used in our test environment; these are the same values we used to generate the subsequent screenshots and example code.

Table 7.8. The DNS Settings

Domain Name       example.com
Server Name       dns1.example.com
Server Address    172.17.1.1

Table 7.9. The DHCP Settings

Server address    172.17.1.1
Address range     172.17.1.201 - 172.17.1.220

Table 7.10. The DNS Subdomain Settings

Sub domain        grid1.example.com
Host name         cluster1-gns.grid1.example.com
VIP address       172.17.1.200

Table 7.11. The SCAN Listeners

SCAN Name         cluster1-scan.grid1.example.com
SCAN Addresses    Assigned by DHCP

Table 7.12. The Public Network Configuration

Network               172.17.0.0
Gateway               172.17.0.254
Broadcast address     172.17.255.255
Subnet mask           255.255.0.0
Node names            london1, london2, london3, london4
Public IP addresses   172.17.1.101, 172.17.1.102, 172.17.1.103, 172.17.1.104
VIP names             Assigned by GNS
VIP addresses         Assigned by DHCP

Table 7.13. The Private Network Configuration

Network           192.168.1.0
Subnet mask       255.255.255.0
Names             london1-priv london2-priv london3-priv london4-priv
IP addresses      192.168.1.1, 192.168.1.2, 192.168.1.3, 192.168.1.4

Configuring DNS

Next, you need to verify that the bind package has been installed and is running. If the bind package is not available, then see Chapter 6 for details on installing and configuring the bind package.

As with the manual configuration, /etc/named.conf is configured as detailed in Chapter 6.

Forward lookups for example.com are configured in master.example.com as follows:

$TTL 86400
   @             IN SOA dns1.example.com. root.localhost. (
                          2010063000          ; serial
                          28800               ; refresh
                          14400               ; retry
                          3600000             ; expiry
                          86400 )             ; minimum
   @             IN NS  dns1.example.com.
   localhost     IN A   127.0.0.1
   dns1          IN A   172.17.1.1
   london1       IN A   172.17.1.101
   london2       IN A   172.17.1.102
   london3       IN A   172.17.1.103
   london4       IN A   172.17.1.104
   $ORIGIN grid1.example.com.
   @             IN NS  cluster1-gns.grid1.example.com.
                 IN NS  dns1.example.com.
   cluster1-gns  IN A   172.17.1.200 ; glue record

For the automatic configuration, it is necessary to specify public addresses for each node in the cluster. It is also necessary to create a delegated subdomain (e.g., grid1.example.com), which is managed by the Grid Naming Service (GNS). There are several alternative syntaxes you can use to create a delegated subdomain; we have found the syntax shown to be the most effective.

Note that GNS is a cluster resource that requires its own VIP, which is allocated in DNS. At any one time, GNS is active on only one node in the cluster. If that node is shut down, Oracle Clusterware will automatically relocate the GNS VIP to one of the remaining nodes in the cluster.

In the automatic configuration, VIP addresses and SCAN addresses are allocated using GNS and DHCP. Therefore, it is not necessary to specify these addresses in DNS. Reverse lookups for example.com can be configured in 172.17.1.rev, as in this example:

$TTL 86400
@    IN SOA dns1.example.com. root.localhost. (
                       2010063000          ; serial
                       28800               ; refresh
                       14400               ; retry
                       3600000             ; expiry
                       86400 )             ; minimum
        IN NS  dns1.example.com.
1       IN PTR dns1.example.com.
101     IN PTR london1.example.com.
102     IN PTR london2.example.com.
103     IN PTR london3.example.com.
104     IN PTR london4.example.com.

Because VIP and SCAN addresses are allocated dynamically by GNS and DHCP in the automatic configuration, it is likewise unnecessary to specify reverse lookups for these addresses in DNS.

As with the manual configuration, forward and reverse lookups for the local domain must be configured in DNS (you can see how to configure these in Chapter 6).
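
Once the zone files have been loaded (for example, by running service named reload), you can verify the forward and reverse entries from any node. The following commands are a simple sketch of this check:

[oracle@london1]$ nslookup london1.example.com
[oracle@london1]$ nslookup 172.17.1.101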

On each node in the cluster, /etc/resolv.conf is configured as follows:

search example.com grid1.example.com
nameserver 172.17.1.1
options attempts:2
options timeout:1

Note that you must also include the GNS subdomain in the search path, as covered in Chapter 6. Theoretically, GNS can also allocate addresses for the private network. However, in our opinion, it is advisable to continue to configure these addresses manually in /etc/hosts and/or DNS. Using manually configured private addresses allows you to maintain some mapping between public addresses, private addresses, and node numbers, which simplifies troubleshooting.

In this example, as with the manual configuration, we configured the private network addresses in /etc/hosts on each node:

192.168.1.1        london1-priv.example.com london1-priv
192.168.1.2        london2-priv.example.com london2-priv
192.168.1.3        london3-priv.example.com london3-priv
192.168.1.4        london4-priv.example.com london4-priv

The -priv suffix is not mandatory, but it has become a convention in RAC deployments.

Configuring DHCP

Next, verify that the dhcp package is installed and that the dhcpd service is running. If the dhcp package is not available, see Chapter 6 for information on how to install and configure it. For our installation, we configured DHCP with a pool of 20 addresses (172.17.1.201 through 172.17.1.220) from which the VIP and SCAN addresses can be allocated.
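
For reference, the following minimal /etc/dhcpd.conf sketch matches the values in Tables 7-9 and 7-12; the lease times are illustrative, and Chapter 6 covers the configuration in full:

ddns-update-style none;
subnet 172.17.0.0 netmask 255.255.0.0 {
    range 172.17.1.201 172.17.1.220;
    option routers 172.17.0.254;               # gateway from Table 7-12
    option broadcast-address 172.17.255.255;
    option subnet-mask 255.255.0.0;
    default-lease-time 21600;                  # illustrative lease times
    max-lease-time 43200;
}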

Setting up the Grid Plug and Play Information Page

If the Grid Infrastructure software is to be owned by a dedicated grid user, then the OUI should be started by that user; in our example, we use the oracle user. As the oracle user, start the OUI with the following command:

[oracle@london1]$ /home/oracle/stage/grid/runInstaller

Note that the installer is written in Java and can take a couple of minutes to load. The installation procedure that uses the automatic configuration is similar to the procedure that uses the manual configuration, with a handful of exceptions. In the following pages, we describe only those pages that differ significantly.

The first installer page of significance is the Grid Plug and Play Information Page (see Figure 7-20).

Figure 7-20. The Grid Plug and Play Information page

Several fields on the Grid Plug and Play Information page differ between the manual and automatic configurations. Unfortunately, in this release the fields are not displayed in an entirely intuitive order.

To use GNS, you should ensure that the Configure GNS box is checked. This will allow you to specify values for the remaining fields. The GNS sub domain is the name of the domain that will be delegated by DNS (e.g., grid1.example.com).

The GNS VIP address is a single VIP address allocated to GNS. This address must be added to the DNS configuration.

In the example, we have also specified a SCAN name within the GNS subdomain (cluster1-scan.grid1.example.com). Placing the SCAN name in the GNS subdomain allows the SCAN VIP addresses to be allocated automatically by DHCP and resolved by GNS.
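
Once the installation is complete, you can confirm that DNS delegates SCAN resolution to GNS. This nslookup sketch assumes the names used in our example:

[oracle@london1]$ nslookup cluster1-scan.grid1.example.com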

Configuring the Cluster Node Information Page

The next page of note is the Cluster Node Information page (see Figure 7-21).

Figure 7-21. The Cluster Node Information page

For automatic configuration, it is only necessary to specify a list of host names on the Cluster Node Information page. The VIP addresses can be omitted because these will be allocated automatically by GNS.

If you have a large number of nodes or very long domain names, you can specify a list of nodes in a cluster configuration file. For the automatic configuration option, the syntax looks like this:

<cluster_name>
<node_name>
[<node_name>]

The following example shows four nodes in a cluster:

cluster1
london1.example.com
london2.example.com
london3.example.com
london4.example.com

The SSH connectivity configuration and testing functionality is identical to that for the Grid Infrastructure installation with manual configuration.

The Summary Page

After making all your choices, you'll come to the Summary Page (see Figure 7-22). This is where you can review your choices before committing to the installation.

Figure 7-22. The Installation summary

The options we've discussed should appear in the Summary page for you to review. Options you'll see detailed on this page include the GNS Sub Domain, the GNS VIP Address, and the SCAN Name.

The remainder of the Grid Infrastructure installation with automatic configuration is identical to the Grid Infrastructure installation with manual configuration.

Typical Installation

Grid Infrastructure offers either a Typical Installation or an Advanced Installation. In reality, there is little difference between the Typical Installation and the Advanced Installation with manual configuration. Troubleshooting the Advanced Installation is simpler, so we recommend that you attempt that type of installation initially. When you are confident that your environment is correctly configured, then you may also attempt the Typical Installation.

Preparation for the Typical Installation is identical to the preparation required for the Advanced Installation with manual configuration. In this section, we discuss areas where the Typical Installation differs from the Advanced Installation with manual configuration.

Note

It is not possible to configure the Grid Naming Service using the Typical Installation.

Choosing the Installation Type

The Installation Type Page lets you choose whether to make a Typical or Advanced Installation (see Figure 7-23). Select Typical Installation, and then click Next to continue.

Figure 7-23. The Advanced Installation choice

Specifying the Cluster Configuration Page

You enter the SCAN name on the Cluster Configuration page (see Figure 7-24). The cluster name is automatically derived from the SCAN name, so it is not necessary to enter a cluster name.

Figure 7-24. The Cluster Configuration page

The Typical Installation requires a manual configuration, so it is necessary to enter both the node name and VIP name for each node in the cluster. The SSH configuration is identical to that used in the Advanced Installation.

When you're done filling out the page, it is good practice to confirm that the network interfaces have been correctly assigned by the installer. Do that by clicking the Identify Network Interfaces button.
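
You can also cross-check the interface assignments at the operating system level; this sketch assumes that the public network is on eth0:

[oracle@london1]$ /sbin/ifconfig eth0 | grep 'inet addr'
          inet addr:172.17.1.101  Bcast:172.17.255.255  Mask:255.255.0.0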

Install Locations Page

The Install Locations page (see Figure 7-25) is completely different for the Typical Installation. This page allows you to enter the following fields:

  • Oracle Base location

  • Software Location

  • Cluster Registry Storage Type

  • Cluster Registry Location (Cluster File System storage only)

  • SYSASM Password (ASM storage only)

  • OSASM Group

Figure 7-25. The Install Locations page

Reviewing the Summary Page for a Typical Installation

The Summary page shows the values entered during the interview session, along with the default values generated by the installer (see Figure 7-26). Check that the generated values are acceptable before continuing with the installation. If they are not acceptable, then cancel the installation and perform an Advanced Installation instead.

The remainder of the Typical Installation is identical to the Advanced Installation.

Figure 7-26. The Summary page for a Typical Installation

Installing a Standalone Server

It is also possible to install Grid Infrastructure on a standalone server. In Oracle 10g and Oracle 11g Release 1, it was possible to install ASM for a single-instance database. In that case, the ASM instance also included a cut-down version of Cluster Synchronization Services (CSS), which ran in local mode and was configured in a local copy of the OCR.

In Oracle 11g Release 2, it is no longer possible to install Oracle Clusterware and ASM separately, so a complete Grid Infrastructure installation is required to run ASM for a single-instance database. However, both Oracle Clusterware and ASM are restricted to running on a single node, which simplifies the preparation and installation process.

It is not possible to configure GNS with a Grid Infrastructure standalone server installation. It is not necessary to configure cluster names, SCAN names, SCAN VIPs, private network addresses, or public VIPs for a standalone installation.

In the remainder of this section, we will discuss some of the significant differences between the Grid Infrastructure installation for a standalone server and the Advanced Installation for a cluster.

Selecting an Installation Option

The Installation Option page lets you select the Install and Configure Grid Infrastructure for a Standalone Server option (see Figure 7-27).

Note

It is not possible to convert a standalone installation into a cluster; if you wish to do this, you will need to deinstall the standalone Grid Infrastructure software and then reinstall Grid Infrastructure for a cluster.

Figure 7-27. The Grid Infrastructure installation options

Creating an ASM Disk Group Page

ASM is mandatory for the Standalone Server installation. Figure 7-28 shows the configuration page. You will need to specify an initial ASM disk group at this stage. You can use the ASM Configuration Assistant (ASMCA) or SQL*Plus to add more disk groups later.

Figure 7-28. ASM configuration for a standalone installation
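
You can add further disk groups after the installation using either tool; in this SQL*Plus sketch, the disk group name and disk path are purely illustrative and depend on how your ASM disks have been provisioned:

SQL> CREATE DISKGROUP data2 EXTERNAL REDUNDANCY
  2    DISK '/dev/oracleasm/disks/VOL3';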

Reviewing the Summary Page for a Standalone Installation

The Summary page for a standalone installation shows the options specified during the interview process (see Figure 7-29). Press Finish to start the installation.

Figure 7-29. The Standalone Installation Summary page

Once you press Finish, the Setup page will be displayed, and the following tasks will be performed:

  1. Preparation

  2. Copy Files

  3. Link Binaries

  4. Setup Files

Executing the Configuration Scripts

The Execute Configuration scripts dialog box lists one or more scripts that should be executed by the root user on the server (see Figure 7-30). You may need to work with your system administrator to execute those scripts.

Figure 7-30. Configuration scripts for a standalone installation

If an Oracle inventory does not already exist, you will be asked to run the inventory creation script (orainstRoot.sh). The following example shows the output of running this script:

[root@london4]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete

The root.sh script configures Oracle Clusterware, initializes the OCR, and adds the OHASD daemon to /etc/inittab, as in this example:

[root@london4]# /u01/app/oracle/product/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-08-02 12:41:11: Checking for super user privileges
2009-08-02 12:41:11: User has super user privileges
2009-08-02 12:41:11: Parsing the host name
Using configuration parameter file:
/u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node london1 successfully pinned.
ohasd is starting
Adding daemon to inittab
2009-08-02 12:41:23
Changing directory to /u01/app/oracle/product/11.2.0/grid/log/server17/ohasd
Successfully configured Oracle Grid Infrastructure for a Standalone Server

As you can see, the standalone server configuration for Oracle Clusterware is much simpler than its cluster equivalent.
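
You can verify that the resulting stack is running using crsctl. The following is a sketch of the expected check; the exact message text may vary by patch level:

[oracle@london4]$ crsctl check has
CRS-4638: Oracle High Availability Services is online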

Deinstalling the Grid Infrastructure Software

If you wish to deinstall the Grid Infrastructure software, be aware that the OUI provides no deinstallation option, nor can you reinstall over an existing installation. However, you should not simply delete the Oracle home directories manually. Instead, use the deinstall script supplied in the deinstall directory, which can remove the Grid Infrastructure software either from the cluster as a whole or from individual nodes in isolation. The script first runs a check operation and prompts you to confirm the deinstallation before performing a clean operation, as shown in this example:

[oracle@london1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2010-06-15_12-18-38-PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################## CHECK OPERATION START ########################
Install check configuration START


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: london1,london2,london3,london4

Install check configuration END
...
Do you want to continue (y - yes, n - no)? [n]: y

The clean operation prompts you to run a script as the root user on each node, as shown in the following example. When the script has completed, return to the deinstall session and press Enter to continue:

[root@london2 ~]# /tmp/deinstall2010-06-15_12-18-38-PM/perl/bin/perl -I/tmp/deinstall2010-06-15_12-18-38-PM/perl/lib -I/tmp/deinstall2010-06-15_12-18-38-PM/crs/install /tmp/deinstall2010-06-15_12-18-38-PM/crs/install/rootcrs.pl -force -delete -paramfile /tmp/deinstall2010-06-15_12-18-38-PM/response/deinstall_Ora11g_gridinfrahome1.rsp -lastnode
2010-06-15 12:22:39: Parsing the host name
2010-06-15 12:22:39: Checking for super user privileges
2010-06-15 12:22:39: User has super user privileges
Using configuration parameter file: /tmp/deinstall2010-06-15_12-18-38-PM/response/deinstall_Ora11g_gridinfrahome1.rsp
...
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'london2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

After the deinstallation process is complete, you may reinstall the Grid Infrastructure software, if desired.

Summary

In this chapter, we have discussed the four main installation procedures for the Grid Infrastructure in Oracle 11gR2. This release includes both Oracle Clusterware and ASM. In the next chapter, we will describe how to install the RDBMS software.
