CHAPTER 5

image

Installing Oracle Linux

In this chapter you are going to learn how to install and configure Oracle Linux 6 for use with Oracle database release 12c. Oracle Linux 6 has been selected because its installable DVD and CD-ROM images are still freely available. In this respect, Oracle Linux is probably the most accessible supported Linux for use with any Oracle database product. In fact, if you are keen on the “one vendor” approach, Oracle Linux could be your sole operating system for Oracle products besides the database.

For a long time, the differences between Red Hat Linux and Oracle Linux were negligible. That changed when Oracle released its own branch of the kernel—the so-called Unbreakable Enterprise Kernel (UEK). I use the term kernel-UEK to refer to this kernel.

The UEK causes a bit of a dilemma for the system and database administrator. Should you break compatibility with the most widely used Linux distribution and use kernel-UEK? Or should you maintain compatibility instead? Additionally, it might be difficult to get vendor certification for Oracle Linux 6 and kernel-UEK. Where one could argue that the use of Oracle Linux with the Red Hat kernel was more or less like using Red Hat, this is no longer true when you switch to kernel-UEK. However, taking Oracle’s aggressive marketing into account, and their ability to set their own timescales for “third-party” certification, it might be a good idea to go with their favored combination of Oracle Linux 6 plus kernel-UEK for future deployments.

image Note  The use of Oracle Linux in this book should not be interpreted as a recommendation for this distribution. The choice of Linux distribution should instead be made based on certification with third-party products, in-house experience, and the quality of support.

Installing Oracle Linux 6

The installation of Oracle Linux 6 is not too different from the previous release. Users and administrators of Oracle Linux 5 will quickly find their way around. Since this book is all about minimizing manual (read: administrator) intervention, it will focus on how to automate the installation of Oracle Linux 6 as much as possible. However, it is beneficial to get to know the process of the GUI installation first. Kickstart, the Red Hat/Oracle Linux mechanism for lights-out installation, is much easier to understand once the steps for graphical installation have been shown.

The following sections assume that the operating system is installed on a BIOS-based server. While this book was being written, the number of servers using the Unified Extensible Firmware Interface (UEFI) as a replacement for the BIOS steadily increased, but the switch to UEFI-only systems has not yet happened. The steps for installing Oracle Linux 6 on a UEFI server are almost identical. Consult your documentation on how to make full use of the UEFI features. Alternatively, many UEFI systems have a BIOS-compatibility switch that can be enabled.

While the transition to UEFI is still outstanding, an important change has happened in that 32-bit systems are dying out. In fact, although you get support for 32- as well as 64-bit Linux, you should not deploy new 32-bit Linux systems. They simply suffer from too many shortcomings, especially when it comes to memory handling. Oracle realized this as well, and stopped shipping the database for 32-bit platforms.

The Oracle Linux installation is performed in stages. In the first stage the server to be installed is powered on and uses a boot medium (the installation DVD or minimal boot media) to start. Alternatively, the PXE boot settings can be used to transfer the minimal operating system image to the server. In the following stage, the installation source as defined will be used to guide you through the installation process.

Manual Installation

Before exploring ways to automate the Oracle Linux installation using Kickstart let’s have a look at the interactive installation of Oracle Linux 6 first. At the time of this writing, Oracle Linux 6 update 4 was the latest version available for download. The Oracle Linux media can be obtained from Oracle’s self-service portal: http://edelivery.oracle.com/linux. Before you can access the software you need to supply login information using an account with a validated email address. You also need to agree to the export restrictions and license as is standard with Oracle products. From the list of available Linux releases to download, choose the current release of Oracle Linux 6 and wait for the download to finish.

This section assumes the host you are installing to has a DVD-ROM drive to be used for the installation. (Of course, that DVD-ROM drive can be virtualized.) The DVD will be used as the installation source in this section. In the next section, “Automated installation,” further options for installing Oracle Linux over the network are presented.
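Once the ISO image has finished downloading, it is worth checking that the file is intact before using it. The sketch below uses a scratch file in place of the real ISO (whose file name on edelivery.oracle.com will differ); with the real image you would compare against the checksum published on the download page rather than generating one locally:

```shell
# A scratch file stands in for the downloaded ISO in this demonstration.
iso=/tmp/ol6-demo.iso
printf 'dummy iso content' > "$iso"

# Record the checksum, then verify it. With the real image, put the
# published checksum into the .sha256 file instead of generating it here.
sha256sum "$iso" > "$iso.sha256"
sha256sum -c "$iso.sha256"    # prints "/tmp/ol6-demo.iso: OK"
```

The same pattern works with md5sum if Oracle publishes an MD5 digest for the image you downloaded.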

image Note  The advanced installation methods except PXE-booting are out of scope of this book. More details about booting from USB and minimal installation media can be found in Oracle’s online documentation.

After downloading and verifying the integrity of the DVD ISO image, proceed by burning it to an empty DVD. Alternatively, make the ISO image available to the virtualized DVD drive on the server. This DVD contains all the software required to install your server, so the need to juggle CD-ROMs as in previous versions has gone. Ensure that you either have access to a boot menu in your server’s BIOS or manually set the boot order to boot from the CD-ROM/DVD-ROM drive first. Insert the DVD and power the server on. You should be greeted by the boot menu shown in Figure 5-1:

9781430244288_Fig05-01.jpg

Figure 5-1. The boot menu for Oracle Linux 6

Most users will choose the default option of installing a new system.

image Tip  Even though it is technically possible to upgrade from a previous minor release, it is recommended to perform a fresh installation instead. However, instead of wiping out the existing partitions, you could opt for a parallel installation of the new Oracle Linux version. Boot loader entries will allow you to dual-boot the system in case you have to revert to the earlier release for some reason.

The Oracle Linux installation will begin automatically after a timeout or when you press enter. Before launching the graphical installer “Anaconda,” you have the option to perform a check of the installation media. If this is the first time you use the DVD with a server it is recommended to perform the test. You can safely skip it if you have already successfully used the DVD in a previous installation.

Anaconda

Your system will then start Anaconda, the graphical configuration assistant to guide you through the rest of the installation.

image Tip  If your system does not start the X11 session necessary to display Anaconda, you can use its built-in VNC capabilities. Start a VNC viewer in listening mode on a different machine in the build network and specify the boot parameters vnc vncconnect=vncviewerHost:vncPort vncpassword=remoteVNCPassword at the command line.

The next steps are almost self-explanatory. After acknowledging the welcome screen by clicking “Next” you have the option to select the language to be used during the installation. Alternative languages can always be installed for use with the system later. Once you are happy with the settings, click on the “Next” button to select the keyboard layout to be used during the installation. The keyboard setting, just like the language setting made earlier, can be changed after the system has been installed. From a support point of view it is sensible to limit the installation to English. Mixing multiple languages in different regions makes a global support policy difficult. Translating error messages and troubleshooting can become a problem for non-native speakers.

Choice of storage devices

So far the installation process has been very similar to the one used in Oracle Linux 5. The next screen, though, titled “Storage Devices,” is new to Oracle Linux 6. It offers two choices:

  1. To use “basic storage devices”
  2. Alternatively use “specialized storage devices”

Specialized storage in this context allows you to install the operating system to a SAN disk attached via Fibre Channel, Fibre Channel over Ethernet, or iSCSI. It also (finally!) offers support for installation on hardware RAID and dm-multipathed devices. For all other needs, simply go with the first option.

image Note  The use of specialized storage devices is out of scope of this book. SAN booting however is a very interesting concept that you could implement to quickly replace failed hardware, mainly in blade enclosures.

In the following sections it is assumed that you are installing the operating system to internally attached disks. Most servers will use an internal RAID adapter to present a single view of the internal disks. Consult your vendor’s manual for more information on how to configure these. In many cases the array driver is already part of the Linux distribution, and you don’t need to worry about it once the BIOS setup is completed. If not, fear not: you have the option to add vendor drivers during the installation process. Whichever way you decide to install the operating system, it is important to have a level of resilience built in. Resilience can come in the form of multiple disks in a RAID 1 configuration, or in the form of multiple paths to the storage to prevent a single point of failure.

Following along the path set out by the example, select “basic storage devices” and click on the “Next” button. You may be prompted with a “storage device warning” before the next screen appears. Oracle Linux tries to read an existing partition table on all detected storage devices made available to the host. The warning is raised whenever no partition table can be found, or if the detected partition table cannot be read and understood.

If you are certain that the disk for which the warning appears is blank, unpartitioned, and not in use you should click on “Yes, discard my data.” Be careful not to check “Apply my choice to all devices with undetected partitions or filesystems!” You can exclude a disk from the further partitioning process by clicking on “No, keep my data” for that disk.

image Tip  The graphical installer allows you to drop into a shell session at any time. Press CTRL-ALT-F2 for a root shell and CTRL-ALT-F3 to view the Anaconda logs. You could try to read the disk header using dd if=/dev/xxx bs=1M count=1 | od -a to ensure that the disk is not in use. CTRL-ALT-F6 brings you back to Anaconda.
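The dd check from the tip can be rehearsed safely before you run it against a real device. In this sketch a scratch file stands in for /dev/xxx; on a genuinely blank disk the od output consists of nothing but nul characters:

```shell
# Scratch file standing in for a disk device; substitute the real /dev name.
disk=/tmp/fakedisk
printf 'LABELONE' > "$disk"    # simulate an LVM metadata signature in the header

# Dump the first megabyte of the "disk" and render it as named characters.
dd if="$disk" bs=1M count=1 2>/dev/null | od -a | head -2
```

Any recognizable text in the output, such as the LVM signature LABELONE simulated here, indicates the disk is in use and should be excluded from partitioning.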

You need to be particularly careful at this stage for cluster installations when adding nodes to the cluster. If your system administrator has made the ASM LUNs available to the host you are about to install, they must not be touched by the installer; otherwise you will lose data! To prevent human error, it is usually safer not to present the database LUNs to the host at installation time. Linux is very flexible, and SAN storage can be made available to the host without rebooting. Be aware that the storage warning dialog is repeated for each disk with an unknown partition table or file system.

Surprisingly, the next screen allows you to configure the network; the storage configuration will be performed later.

Network configuration

You might have expected to be able to review the storage layout now, but first you need to define the hostname of your new server. If you are using DHCP, supply the short hostname only; otherwise provide a fully qualified domain name (FQDN). The fully qualified hostname has been used in Figure 5-2. New to Oracle Linux 6 is the ability to define the network properties at this stage. Click on the “Configure Network” button in the lower-left corner of the screen to access the network configuration settings. The following dialog allows you to configure wired, wireless, mobile broadband, VPN, and DSL connections. Most enterprise systems will need to configure wired connections at this stage. Highlight your current network card by clicking on it, then choose “Edit” from the right-hand menu. The resulting configuration options are shown in Figure 5-2.

9781430244288_Fig05-02.jpg

Figure 5-2. Editing system properties for the first ethernet device discovered

This network configuration utility uses NetworkManager under the covers. NetworkManager is the replacement for the network administration tool (“system-config-network”) and one of the bigger changes in Oracle Linux 6 over its previous versions. This does not mean that the way network settings were configured in Oracle Linux goes away, but you should be aware of the new way of doing things. The system settings for the selected device can be chosen in the tabs “Wired,” “802.1x Security,” “IPv4 Settings,” and “IPv6 Settings.” You have the option to make the following changes on these tabs:

Wired: On this tab you can define the Media Access Control (MAC) address for a specific interface. It is suggested to leave this value blank so that the MAC address reported by the interface is used. The MAC address is important when booting: should the network interface card (NIC) return a different MAC address than the one defined (which can happen during cloning of virtual machines), the network interface will default to a DHCP address and discard its static configuration if one is used.

802.1x Security: On this tab you can define 802.1x port-based network access control (PNAC). Normally you do not need to enable 802.1x security; when in doubt, leave this option unchecked.

IPv4 Settings: Although the days of the “old” Internet addressing scheme, IPv4, are numbered, it remains the most important way of connecting servers to the local network for the foreseeable future. As with the network administration tool, you can define the network to be dynamically configured using the Dynamic Host Configuration Protocol (DHCP), or configured manually. Most users will probably choose a static configuration by supplying an IP address, netmask, and gateway. Additionally, DNS servers can be specified for name resolution, as well as a search domain to be appended in the absence of an FQDN. If necessary you can even set up static routes by clicking the “Routes…” button.

IPv6 Settings: Despite several global IPv6 awareness days, there has not been a breakthrough in the adoption of the next-generation IP protocol inside organizations. Most users will therefore choose to disable IPv6 here (“Method: ignore”).
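For reference, the settings made in this dialog end up in the interface configuration file under /etc/sysconfig/network-scripts. A static IPv4 configuration might look like the following sketch; all addresses shown are made-up examples:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative values only
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none          # static configuration instead of DHCP
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
DNS1=192.168.100.2
IPV6INIT=no             # corresponds to "Method: ignore" for IPv6
```

Knowing where these values land makes it much easier to review or correct the network setup after the installation has finished.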

Time zone settings and root password

Following the network configuration you need to set the system’s time zone. This setting is important as it indicates where in the world the server is located. If you are planning on clustering the Oracle system yet to be installed, please ensure that all cluster nodes share the same time zone.

To pick a time zone, you can either click on the city nearest to your location, depicted by a yellow dot on the map, using the zoom controls as needed, or use the dropdown list to find the most appropriate time zone.

image Note  The documentation states certain implications when dual-booting your Oracle Linux installation with Windows. A production Oracle database server, however, is not likely to be dual-booted with a non-UNIX operating system.

Before finally making it to the partitioning wizard, you need to set a secure root password to be used with this server. As always, only system administrators should have access to the root account. Click “Next” to proceed with the installation.

Partitioning

An interesting detail of the Oracle Linux installation is the somewhat belated option to choose a partitioning layout. Again this section assumes you have chosen “basic storage devices.” The initial screen offers you a number of options to install Oracle Linux exclusively or in conjunction with other operating systems.

Oracle database servers are not normally configured for dual-booting with non-Linux systems. A planned upgrade of Oracle Linux 5 (or another Linux distribution) to Oracle Linux 6, however, is a valid reason for a dual-boot configuration. Before beginning to partition your server, be sure to create backups of any data you want to preserve. Please bear in mind that mistakes do happen! Once the changes you make on the following screen are written to disk, previously existing partitions may be irrevocably lost!

Whichever option you select from the list, it is recommended to enable the check box labeled “Review and modify partitioning layout.” It takes you to the screen shown in Figure 5-3, where you can fine-tune the setup; the example assumes a single-boot configuration with Oracle Linux 6.4 as the only installed operating system.

9781430244288_Fig05-03.jpg

Figure 5-3. Configuring the storage layout for Oracle Linux 6

The partitioning screen has a great many options, and a complete description of all possible configuration combinations cannot be provided in this chapter. It is best to align the storage configuration with your current standards; that way you can’t really make a mistake. For all the other detail please refer to the online manual. To keep things simple and to prevent mistakes from happening, it is often recommended not to present any storage to the server at this stage that is not immediately required to install the base operating system. The commands necessary to create storage mount points for the Oracle accounts will be described in a later section in this chapter.

The partitioning screen resembles the layout used in Oracle Linux 5, but the graphical representation at the top showing how full an individual disk is only appears after clicking on the disk. The user interface has been uncluttered in comparison with the previous version, but the functionality has remained the same. The “Create” button lets you create the following entities, disk space permitting:

  • Create a standard partition
  • Create a (software) RAID partition
  • Create a (software) RAID device
  • Create an LVM physical volume
  • Create an LVM volume group
  • Create an LVM logical volume

The “Edit” button allows you to make changes to the highlighted partition; the “Delete” button unsurprisingly removes the selected entity. If you get stuck during the partitioning of your system and want to get back to square one, click on the “Reset” button to undo your changes and return to the original layout.

Oracle Linux 6 has a wider set of file systems available to format your disks. Apart from ext3, which was the standard in Oracle Linux 5.x, you can choose ext2, ext4, XFS, and Btrfs.

image Caution  Btrfs is still considered experimental in the upstream kernel. You should probably not use it for production systems yet.

A possible partitioning scheme in Oracle Linux includes the partitions listed in Table 5-1.

Table 5-1. Partitioning layout for the operating system

Mount point: /boot

Size: at least 200 MiB, up to 2 GiB

File system recommendation: Use either an ext2, ext3, or ext4 file system to be on the safe side. The boot loader used in Oracle Linux 6, GRUB 0.97, has a number of known limitations. The boot partition, for example, cannot be in an LVM logical volume, but it can be on a software RAID device (with limitations). Most hardware RAID controllers should be supported by Oracle Linux 6, but check your documentation whether you can create /boot on a hardware RAID. The virtual boot loader in Xen-based virtualization might have problems with ext4; ext2 is tested and confirmed to work.

Mount point: swap

Size: see below

File system recommendation: The swap partition is covered in detail in the section “Considerations for swap space” below. Although not a requirement, it is strongly recommended to create a swap partition.

Mount point: /

Size: minimum 5 GiB, preferably more

File system recommendation: The smallest documented root partition of only 3 GiB does not allow the installation of all packages required for an Oracle database installation. If possible, opt for 8 GiB or more. This allows for staging Oracle installation media and a sufficiently large /tmp directory. If you would like to use storage replication later, you should not install the Oracle binaries on local disk. You should definitely not install them in the / file system: core dumps can quickly fill up the root partition, resulting in serious trouble for Linux. In line with the advice not to present storage unnecessary for the base OS installation, the Oracle LUNs are not presented yet.

Mount point: others

File system recommendation: For many other open-source software packages, additional mount points such as /tmp and /home are suggested. For an Oracle database server these are usually not needed.

It is strongly recommended not to install the operating system on a single hard disk. Whenever possible make use of a hardware RAID adapter inside the server; otherwise the failure of a single disk will result in an outage of the server that could easily have been prevented. If your server does not use a built-in RAID adapter for the internal disks, you have the option to create a software RAID configuration. An example of a software RAID setup is shown in Figure 5-4 below. Matching sized partitions are created on the internal disks and mirrored to form RAID 1 pairs. Of course, other software RAID levels are available as well, but RAID 1 seems most practical.

9781430244288_Fig05-04.jpg

Figure 5-4. Using Software RAID to protect the operating system

As you can see, the partition layout is symmetrical between the disks: three partitions are created on each device with a partition type of “software RAID” for this small Oracle SE database server. One gigabyte of swap resides on /dev/sda1 and /dev/sdb1 each, a 200 MiB boot partition on /dev/sda2 and /dev/sdb2, and a root partition spans the rest of disks /dev/sda and /dev/sdb. In the next step RAID 1 pairs are created, resulting in RAID devices. The boot loader can be installed in the /boot partition. The resulting disk layout is shown here:

[root@server1 ∼]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              6.7G  787M  5.6G  13% /
tmpfs                 497M     0  497M   0% /dev/shm
/dev/md1              194M   39M  146M  21% /boot

An alternative approach would be to create two RAID 1 devices, /dev/md0 and /dev/md1. The first one has to be the boot partition, since the boot loader cannot reside in an LVM logical volume. The other RAID device can be converted into a physical volume for use with LVM.

The Linux Logical Volume Manager (LVM) is an excellent choice to add flexibility to your disk layout. LVM is based on physical volumes, or LUNs. For example, a software RAID device can also be a physical volume. Multiple physical volumes can be aggregated into a volume group, which provides the cumulative amount of free space of all underlying physical volumes. You then carve space out of the volume group, creating logical volumes. On top of a logical volume you create the file system and the mount point. You can be guided through all of these steps in the installer, accessible by the “Create” button.

Now why would you use this complex-sounding LVM at all? Because it gives you the ability to grow and even shrink your file systems! If you are using plain partitions it is very difficult, if not impossible, to resize a file system. With LVM all you need to do is extend the logical volume and resize the file system on top of it. This can be a life saver if you need to upgrade Oracle and OUI complains about a lack of space.
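To make the procedure concrete, the following sketch shows the two commands involved in growing a file system that sits on LVM. The volume group and logical volume names (rootvg/oraclelv) are invented, the commands require root on a live system, and the DRY_RUN guard used here only prints the commands instead of executing them:

```shell
# DRY_RUN=1: print the commands rather than execute them (they need root
# and a real volume group). Set DRY_RUN=0 on a live system.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run lvextend --size +10G /dev/rootvg/oraclelv   # grow the logical volume by 10 GiB
run resize2fs /dev/rootvg/oraclelv              # grow the ext file system to match
```

Note that resize2fs can grow ext3/ext4 file systems while they are mounted; shrinking requires the file system to be unmounted first.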

Considerations for /u01, the Oracle mount point

The Oracle mount points have not yet been covered in this discussion. As with anything in life there is more than one way to do it. One approach would be to keep the Oracle binaries on local storage. This means using volume group “rootvg” or an additional, new volume group on the internal RAIDed disks with multiple logical volumes:

  • Logical volume “swaplv”
  • Logical volume “rootlv” for all Linux binaries, specifically excluding Oracle
  • Logical volume “oraclelv” for all Oracle related binaries

In environments where the Automatic Storage Management option is used you could further divide the “oraclelv” volume into “oragridlv” and “orardbmslv” to enforce the concept of separation of duties. This subject will be covered in more detail later in this chapter.

In addition to the local storage approach just described, many sites make use of LUNs exported via a storage area network to persist the Oracle binaries in addition to the database. That approach is preferred because it allows for greater independence from the hardware. Stateless computing, which has already been mentioned in Chapter 1, is a great way of improving a system’s mean time to recovery, especially in Real Application Clusters. All that needs to be done is to assign the failed node’s “personality” (usually an XML file) to a spare node in the chassis and boot it up. Since it looks and feels exactly like the failed node, it can join the cluster in very little time. All of this is possible thanks to intelligent management tools and node-independent storage.

Regardless of whether you are planning on using local or SAN storage, you should set plenty of space aside for the Oracle installation. Beginning with Oracle 11.2, the company provided point releases as full releases. The installation process for an 11.1 RAC system began with the installation of three software homes using the base release 11.1.0.6 software for Clusterware, ASM, and the RDBMS home. That is, if you followed the white papers and installed ASM into its own software home. It is possible to start the ASM instance out of the RDBMS home as well, but in larger environments every RDBMS patch then means an ASM outage on that node too. You next had to patch each of these homes to the terminal release. In the third step you finally applied the latest Patch Set Update. With the full release you can now skip the middle part and install the full release immediately. However, Oracle strongly discourages in-place upgrades, which drives the space requirements up quite dramatically. Each new point release also seems to be very space-demanding, which leads me to the following recommendation:

  • Plan for 20GiB+ for Oracle Restart if you would like to use Oracle Automatic Storage Management
  • Add an additional 15 GiB for each Oracle RDBMS home

Although this sounds like a lot, it is not. Each upgrade of an Oracle home will require you to store at least another home of the same kind. The above is a conservative recommendation; compare it with the Oracle 12.1 documentation, which recommends that 100 GB be made available for Grid Infrastructure alone! You can remove the old Oracle home after a successful upgrade and a sufficient grace period. Following the “disk space is cheap” philosophy, do not be stingy with disk space for Oracle installations if you can afford it.

Considerations for swap space

Another important consideration for every Oracle installation is the amount of swap space to be provided by the operating system. Discussing virtual memory in Linux easily fills a book on its own; nevertheless I would like to add some information here to allow you to make an informed decision about swap size.

Simply stated, swap allows the operating system to continue running even if all physical memory is exhausted. Technically, swap space is an extension of the virtual memory to hard disk, and can either be a dedicated partition or a file. The use of a dedicated swap partition is recommended over the file-based approach.

For recent Oracle releases, the following formula was used and enforced in Oracle’s Universal Installer:

  • For physical memory less than 2 GiB Oracle wants 1.5x the physical memory as swap space.
  • For systems between 2 and 16 GiB RAM, Oracle recommends a swap partition equal to the size of physical memory.
  • For servers with more than 16 GiB RAM Oracle suggests 16 GiB of swap space.
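The formula above is easy to capture in a small helper. swap_gib is a hypothetical function, with sizes given in whole GiB; note that the integer arithmetic rounds the 1.5x case down:

```shell
# Swap size in GiB for a given amount of RAM in GiB, following the
# installer's formula quoted above. Hypothetical helper, integer math only.
swap_gib() {
  ram=$1
  if [ "$ram" -lt 2 ]; then
    echo $(( ram * 3 / 2 ))   # 1.5x physical memory, rounded down
  elif [ "$ram" -le 16 ]; then
    echo "$ram"               # swap equal to physical memory
  else
    echo 16                   # capped at 16 GiB
  fi
}

swap_gib 8    # -> 8
swap_gib 64   # -> 16
```

Treat the result as a starting point for the partition size, not a hard rule; as the next paragraph shows, even the vendor documentation is not unanimous.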

Interestingly, there are multiple recommendations about the swap size in the Red Hat documentation set! This shows that the amount of swap to be provided is not an exact science. Having elaborated on the use of swap for the operating system, there is one remark to be made for Oracle: your database processes should not swap. Strategies to control the virtual memory in Linux are described later in this chapter.

Writing changes to disk

So far, nothing has happened to the data on disk. This changes as soon as you choose to continue with the installation process by clicking on “Next” in the partitioning screen.

A prominent warning pops up stating that when writing the changes to disk all deleted partitions will indeed be erased, and partitions selected to be formatted will be lost. This is the last warning, and final opportunity to abort the installation process without affecting the data on hard disk.

If you are absolutely sure the settings you have chosen are correct, click on “Write Changes to Disk” to proceed.

Boot loader configuration

Oracle Linux continues to use the GRand Unified Bootloader version 0.9x unlike many consumer distributions, which have switched to GRUB 2. The installation screen does not require too much attention if you are not planning on dual-booting the Oracle database server. As pointed out earlier, the main reason to dual-boot a production database server is when you want to perform an operating system upgrade.

If another Oracle Linux installation is found and correctly detected, it will be added to the boot menu.

Software installation

The Oracle Linux software selection screen is not the most user-friendly interface and occasionally can be difficult to use. The software selection screen is shown in Figure 5-5.

9781430244288_Fig05-05.jpg

Figure 5-5. Selecting packages and repositories for an Oracle Linux installation

For now it is probably easiest to select “Basic Server,” which installs a slim variant of the operating system for use on a server, but without X11. Additional packages can always be added later. Resist the temptation to select “Database Server”: rather than the packages required by Oracle, it installs MySQL and PostgreSQL. What you should add, however, is support for XFS, even if you are not planning on using it right now. To add XFS support, add the additional repository for “Scalable Filesystem Support,” which adds the user-land utilities to create and manage XFS file systems.

The package selection made in the screen shown in Figure 5-5 does not allow you to install the Oracle database immediately; additional packages are necessary. In fact, it is easier to install the missing packages for an Oracle installation later from the command line. Upon clicking the “Next” button one more time, the installation process starts by copying the selected packages to the server. Notice that Oracle now boots off the Unbreakable Enterprise Kernel version 2 by default!
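One convenient way to pull in the missing packages after the first boot is Oracle’s preinstall RPM, available from the public yum server and ULN for Oracle Linux 6. The command below is a fragment, not a runnable script; the exact package name depends on the database release you intend to install:

```shell
yum install oracle-rdbms-server-12cR1-preinstall
```

Besides resolving the package dependencies, the preinstall RPM also performs basic kernel parameter and user setup for the database installation.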

It is now time for a coffee, this process will take a few minutes to complete. After all packages have been transferred, the Anaconda session congratulates you on your new Oracle Linux installation. After clicking on “Reboot” the server will restart with the settings you created in the installation process. Please ensure the DVD is no longer in the drive or virtually mounted to the host.

Automated installation

So far the installation described has been manual. This causes two problems: it is not reliably repeatable, and it requires an administrator to sit through it. Surely the administrator can think of more exciting tasks. These disadvantages make the manual approach infeasible, especially when a larger number of servers has to be installed. Luckily there are alternatives to this labor-intensive process.

Taking a step back and providing a simplified view of the installation process results in the following steps:

  1. Loading the boot loader
  2. Fetching the kernel and initial RAM disk
  3. Starting the installation process

The boot loader is normally installed in the master boot record of the operating system “disk,” from where it picks a kernel and initial RAM disk. On a system that is yet to be installed this is obviously not the case and the boot loader will either be on the installation media or provided via the network using the Preboot Execution Environment (PXE). After the kernel has been loaded, control is transferred to the installer from where it is no different than the manual installation.

Additional boot arguments can be passed on the boot loader command line instructing the installer to perform an automated installation. These parameters indicate the location of the Kickstart file and the source of the installation media. Let’s begin by examining how the kernel and initial RAM disk can be obtained via the network.

image Note  The examples shown below have been created with the Security Enhanced Linux (SELinux) subsystem set to “permissive” mode using Oracle Linux 6. If SELinux is set to “enforcing” additional work is necessary to allow the network services (apache/tftpd) to start and serve the files.

Preparing for PXE booting

Booting into the Preboot Execution Environment and then using Kickstart to install the Linux system to match the Unix and Oracle standards is the ultimate form of database server build sophistication. The whole task can be a little bit tricky if a large number of different hardware platforms are used. This is why I suggested standardizing hardware offerings into three different classes matching processing needs in Chapter 1. This allows the engineering team to develop a stable set of installation images.

Before it is possible to spawn new servers or virtual machines using PXE-boot and network installations a little setup work is needed:

  1. The Oracle Linux installation files must be made available over the network.
  2. A TFTP (“trivial FTP”) server must be available to provide a boot environment.
  3. A DHCP (“Dynamic Host Configuration Protocol”) server must be configured.

Normally you would combine the above roles on one server. If provisioning of new Oracle Linux instances, physical or virtual, is a critical task, consider setting up multiple installation servers in your network segment. Rsync or similar utilities can be used to keep the installation servers in sync.

Making the installation tree available

The first task is to make the installation source available over the network. You have a choice of providing the sources via FTP, HTTP, NFS, and other protocols; in this example HTTP is used. To export the installation source you need to install a web server such as apache using the yum utility. The example assumes that a yum repository has been configured according to your company’s standard. The installation of the web server is shown here:

# yum install httpd
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.2.15-15.0.1.el6 will be installed
[...]
--> Finished Dependency Resolution
 
Dependencies Resolved
 
=============================================================================
 Package              Arch          Version                  Repository Size
=============================================================================
Installing:
 httpd                x86_64        2.2.15-15.0.1.el6        cd        808 k
Installing for dependencies:
 apr                  x86_64        1.3.9-3.el6_1.2          cd        123 k
 apr-util             x86_64        1.3.9-3.el6_0.1          cd         87 k
 apr-util-ldap        x86_64        1.3.9-3.el6_0.1          cd         15 k
 httpd-tools          x86_64        2.2.15-15.0.1.el6        cd         69 k
 
Transaction Summary
=============================================================================
Install       5 Package(s)
 
Total download size: 1.1 M
Installed size: 3.5 M
Is this ok [y/N]: y
Downloading Packages:
-----------------------------------------------------------------------------
Total                                        8.1 MB/s | 1.1 MB         00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : apr-1.3.9-3.el6_1.2.x86_64                                1/5
  Installing : apr-util-1.3.9-3.el6_0.1.x86_64                           2/5
  Installing : httpd-tools-2.2.15-15.0.1.el6.x86_64                      3/5
  Installing : apr-util-ldap-1.3.9-3.el6_0.1.x86_64                      4/5
  Installing : httpd-2.2.15-15.0.1.el6.x86_64                            5/5
 
Installed:
  httpd.x86_64 0:2.2.15-15.0.1.el6
 
Dependency Installed:
  apr.x86_64 0:1.3.9-3.el6_1.2
  apr-util.x86_64 0:1.3.9-3.el6_0.1
  apr-util-ldap.x86_64 0:1.3.9-3.el6_0.1
  httpd-tools.x86_64 0:2.2.15-15.0.1.el6
 
Complete!

You should also enable apache to start at boot time using the familiar chkconfig command:

# chkconfig --level 345 httpd on

Do not start apache at this stage; a little more setup work is needed. Begin by mounting the Oracle Linux ISO image to your preferred location; this example assumes /media/ol64. The steps necessary to make the software available to apache are shown here (again assuming Oracle Linux 6.4):

# mkdir /media/ol64
# mount -o loop /m/downloads/linux/V37084-01.iso /media/ol64

Alternatively you could of course copy the contents of the DVD to a directory of your choice. Next apache must be told where to look for the files. Create a file ol64.conf in /etc/httpd/conf.d with the following contents:

Alias /ol64/ "/media/ol64/"
 
<Directory "/media/ol64">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from 192.168.100.0/255.255.255.0
</Directory>

This creates an alias for use with apache and, critically, allows the installer to follow symbolic links, which is important during the installation. The configuration also restricts access to the installation tree to the build subnet 192.168.100.0/24.

With all this done, start apache using “service httpd start”. When referring to the Oracle Linux install source in a web browser, be sure to end the URL with a slash, i.e., http://installServer/ol64/. Otherwise apache will complain that the directory does not exist. Firewalls are another potential source of problems: if you are using them, you need to permit access to port 80 on the web server to reach the previously copied files.

Setting up the TFTP server

The TFTP server will provide the initial boot image for Oracle Linux 6. To stay in line with the rest of the book I assume that your installation server runs Oracle Linux 6 as well. Connect to your installation server and install the tftp-server.x86_64 package using yum as shown here:

# yum install tftp-server.x86_64
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package tftp-server.x86_64 0:0.49-7.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
=============================================================================
 Package            Arch          Version           Repository          Size
=============================================================================
Installing:
 tftp-server        x86_64        0.49-7.el6        ol6_u2_base         39 k
 
Transaction Summary
=============================================================================
Install       1 Package(s)
 
Total download size: 39 k
Installed size: 57 k
Is this ok [y/N]: y
Downloading Packages:
tftp-server-0.49-7.el6.x86_64.rpm                     |  39 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : tftp-server-0.49-7.el6.x86_64                             1/1
 
Installed:
  tftp-server.x86_64 0:0.49-7.el6
 
Complete!

TFTP is not the most secure protocol, and therefore should be adequately protected. It is controlled via xinetd and deactivated by default. To enable and start it, execute the following commands:

# chkconfig --level 345 xinetd on
# chkconfig --level 345 tftp on
# service xinetd start
Starting xinetd:                                           [  OK  ]

The TFTP server supplied with Oracle Linux 6 uses the directory /var/lib/tftpboot to export files. To separate different install images it is a good idea to create a subdirectory per installation source; in the example I am using /var/lib/tftpboot/ol64/. The boot procedure laid out in the following example is based on PXELINUX, a network boot loader from the SYSLINUX family. You may know the latter from end-user Linux installations.

Install the syslinux package using YUM. Once the syslinux package has been installed, a few files need to be copied into /var/lib/tftpboot, as shown below:

# cp -iv /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/ol64/
`/usr/share/syslinux/pxelinux.0' -> `/var/lib/tftpboot/ol64/pxelinux.0'
# cp -iv /usr/share/syslinux/menu.c32 /var/lib/tftpboot/ol64/
`/usr/share/syslinux/menu.c32' -> `/var/lib/tftpboot/ol64/menu.c32'

Now the distribution specific files need to be copied from the installation source.

# cp -iv /media/ol64/images/pxeboot/* /var/lib/tftpboot/ol64/
`/media/ol64/images/pxeboot/initrd.img' -> `/var/lib/tftpboot/ol64/initrd.img'
`/media/ol64/images/pxeboot/TRANS.TBL' -> `/var/lib/tftpboot/ol64/TRANS.TBL'
`/media/ol64/images/pxeboot/vmlinuz' -> `/var/lib/tftpboot/ol64/vmlinuz'
#

Nearly there! A boot menu is now required to allow the user to boot (automatically) into the operating system. PXELINUX requires a directory called “pxelinux.cfg” to be present, from where it reads its boot configuration. The boot menu is created using the following configuration:

# mkdir -p /var/lib/tftpboot/ol64/pxelinux.cfg
 
# cat /var/lib/tftpboot/ol64/pxelinux.cfg/default
timeout 100
default menu.c32
 
menu title ------- Boot Menu -------
label 1
menu label ^ 1) ORACLE LINUX 6 KICKSTART
kernel vmlinuz
append initrd=initrd.img ks=http://imageServer/ks/ol64.cfg ksdevice=link
 
label 2
menu label ^ 2) ORACLE LINUX 6 INTERACTIVE INSTALL
kernel vmlinuz
append initrd=initrd.img

The menu defines two items. The first automatically boots after 10 seconds of inactivity (the timeout is specified in tenths of a second in the configuration file) and begins the silent installation of Oracle Linux 6. It achieves this by pointing to the Kickstart file on the webserver just configured and by specifying that the first device with an active link should be the Kickstart device. Using ksdevice=link nicely circumvents the problem that the first network card does not necessarily have eth0 assigned to it.
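Incidentally, pxelinux.cfg/default is only PXELINUX’s last resort: it first looks for a configuration file named after the client’s MAC address, prefixed with “01-” and with colons replaced by dashes, which makes per-host boot menus possible. Deriving that file name in shell (the MAC address below is just an example):

```shell
# Build the per-host PXELINUX config file name for an example MAC:
# "01-" prefix (Ethernet), lowercase, colons turned into dashes.
mac="08:00:27:62:66:EE"
echo "01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"    # -> 01-08-00-27-62-66-ee
```

A file with that name placed in pxelinux.cfg/ takes precedence over “default” for that one client.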

This concludes the TFTP server configuration, unless you use firewalls again, in which case you need to allow access to UDP port 69. However, it is not yet possible to start the installation: additional information must be passed to the booting client via DHCP.

Configuring the Dynamic Host Configuration Protocol server

The final step in preparing your build server is to install and configure the DHCP server. A package called dhcp is available to take care of this task; it provides the dhcpd daemon. Install it the usual way using yum:

# yum install dhcp
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package dhcp.x86_64 12:4.1.1-25.P1.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
=============================================================================
 Package       Arch            Version                     Repository   Size
=============================================================================
Installing:
 dhcp          x86_64          12:4.1.1-25.P1.el6          cd          815 k
 
Transaction Summary
=============================================================================
Install       1 Package(s)
 
Total download size: 815 k
Installed size: 1.9 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 12:dhcp-4.1.1-25.P1.el6.x86_64                            1/1
 
Installed:
  dhcp.x86_64 12:4.1.1-25.P1.el6
 
Complete!

The version of the ISC DHCP server shipped with Oracle Linux 6 is recent, which is good news for PXE boot environments. DHCP is the mechanism that sends the configuration information to our new server over the network. Before it can do so, however, the configuration file /etc/dhcp/dhcpd.conf needs to be amended. Below is an example of the minimal configuration needed for PXE clients:

allow booting;
allow bootp;
# other global options, such as domain-name, domain-name-servers, routers etc
 
# build network
subnet 192.168.100.0 netmask 255.255.255.0 {
  range 192.168.100.50    192.168.100.100;
  # further options...
 
  # specific hosts
  host pxeboot {
    hardware ethernet 08:00:27:62:66:EE;
    fixed-address 192.168.100.51;
  }
  
}
 
# grouping all PXE clients into a class
class "pxeclients" {
  match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
  next-server 192.168.100.2;
  filename "ol64/pxelinux.0";
}

The first two lines instruct the DHCP server to allow clients to PXE boot, followed by other generic configuration options. Next a subnet is defined, which would normally map to the “build” network. The build network is normally off-limits to regular users and specifically designed for software installations such as the one described.

All PXE clients are grouped into their own class, “pxeclients.” The vendor-class-identifier is an ISC DHCP version 3 directive that lets us identify clients willing to boot via PXE. The next-server directive has to point to the TFTP server, and the filename directive finally instructs the clients to request the file ol64/pxelinux.0 which was copied into place during the TFTP setup earlier.
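To make the match directive more tangible: substring(option vendor-class-identifier, 0, 9) takes the first nine characters of the identifier string the client sends and compares them to the literal “PXEClient”. The same test expressed in shell, using a made-up identifier value:

```shell
# Emulate dhcpd's substring(...) = "PXEClient" comparison in shell;
# the identifier value below is invented for illustration.
vci="PXEClient:Arch:00000:UNDI:002001"
case "$vci" in
    PXEClient*) echo "matches class pxeclients" ;;
    *)          echo "no match" ;;
esac    # -> matches class pxeclients
```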

image Note  You can take this even further by using dynamic DNS updates (“DDNS”). DDNS updates in combination with the host { } directive in DHCP allow for true lights-out provisioning of servers. You can also use frameworks such as Cobbler or Spacewalk.

Before the first server can be started, a Kickstart file is required. Remember to add firewall rules for your DHCP server if needed.

Considerations for the Kickstart file

After all the preparations it is time to define how the system is to be installed. Any reputable Linux distribution comes with a lights-out installation mechanism, and Oracle Linux is no exception to that rule. Just like Red Hat Linux, Oracle Linux uses the Kickstart mechanism to perform automated installations. The Kickstart format can be quite daunting at first, and it occupies a lot of space in the installation manual.

It is not as difficult as one might think, mainly because of the following factors:

  • Every time you manually install Oracle Linux the installer (called “Anaconda”) creates a Kickstart file with the options you chose in /root/anaconda-ks.cfg.
  • A GUI tool, system-config-kickstart is available which guides you through the creation of a Kickstart file.

To automate the installation of Oracle Linux it is usually easiest to review the Anaconda-generated Kickstart file and make subtle changes as necessary. Consider the following Kickstart file which you could use as the basis for your own installation. It has been created as a result of the manual installation shown earlier in the chapter.

install
url --url http://192.168.100.2/ol64/
lang en_US.UTF-8
keyboard uk
network --device eth0 --bootproto dhcp --noipv6
rootpw --iscrypted a...o
firewall --disabled
selinux --permissive
authconfig --enableshadow --passalgo=sha512
timezone --utc America/New_York
bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
ignoredisk --only-use=sda
clearpart --all --initlabel --drives=sda
part /boot --fstype=ext2 --size=200 --ondisk=sda
part pv.008002 --grow --size=200 --ondisk=sda
volgroup rootvg --pesize=4096 pv.008002
logvol swap --name=swaplv --vgname=rootvg --size=2048
logvol / --fstype=ext4 --name=rootlv --vgname=rootvg --size=5940 --grow
 
%packages
@base
@client-mgmt-tools
@console-internet
@core
@debugging
@directory-client
@hardware-monitoring
@java-platform
@large-systems
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-policy
@system-admin-tools
pax
python-dmidecode
oddjob
sgpio
certmonger
screen
strace
pam_krb5
krb5-workstation
%end

In the above example the Anaconda installer is going to install (rather than upgrade) the software using the HTTP protocol, pulling the files from our installation server. Language and keyboard are defined next, before the network definition is set to use DHCP for the first Ethernet device. Additional network interfaces can be configured with their own “network --device ethX” lines.

The root password used is encrypted; the value is taken from the anaconda-ks.cfg file created during the manual installation. The clear text password has been entered during the initial manual installation performed earlier.

image Tip  Encrypted passwords can be taken from an existing /etc/shadow file. Specifying an encrypted password in the Kickstart file is much safer than a clear text password!
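If you would rather generate a fresh hash than copy one from /etc/shadow, one possible approach is sketched below. It assumes an OpenSSL new enough to understand the -6 (SHA-512 crypt) option, i.e. version 1.1.1 or later, so it will not work on a stock Oracle Linux 6 box; there, copying the hash from /etc/shadow remains the simplest route.

```shell
# Produce a SHA-512 crypt hash for "rootpw --iscrypted"; requires
# OpenSSL 1.1.1 or later. Password and salt are examples only.
openssl passwd -6 -salt examplesalt "SecretPassword"
# The output starts with $6$, the SHA-512 crypt identifier.
```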

The following directives disable the firewall and set the security-enhanced Linux settings to permissive. Be sure to update this section after consulting your IT security department! The authconfig command enables the use of shadow passwords, which implies local user accounts on the machine. Furthermore, the shadow passwords should use SHA512 as the password algorithm.

The timezone is then set to America/New_York; this should be adapted to match the location closest to where the server resides. The next few lines deal with the boot loader location and the partitioning scheme. The chosen partitioning options translate into English as follows:

  • clearpart: Clear all partitions and initialize the disk label
  • part /boot: Create a boot partition of 200 MiB using ext2
  • part pv.008002: Create a physical volume and let it grow to use all remaining space
  • volgroup rootvg: Create a volume group “rootvg” using the physical volume just created
  • logvol (twice): Create two logical volumes on volume group rootvg.

The kickstart file explicitly uses disk sda only. The remainder of the file specifies which YUM packages and groups should be installed. If you wonder which groups are available, you can use YUM on a system to execute the following query:

# yum grouplist

Although it looks intimidating at first, especially the partitioning part, the Kickstart process is logical and easy to master. The Kickstart format is also very well documented in the installation guide; please refer to it for further information about available options. Once the Kickstart file is ready, move it to a directory on the webserver from where it is accessible to the build network. For security reasons it should not be exposed outside of the build network. To follow the example you need to place it into /var/www/html/ks/.

A final word of warning: please be careful with the file. It absolutely erases everything! As with anything in this book, please adapt the file to your needs and test it thoroughly before deploying it on real systems.

Testing the automated installation

With all the preparation work completed, it is time to put the procedure to the test. Connect a machine to the build network and enable PXE booting at the hardware level. Most often you would connect to the lights-out management console before starting the server. After a little while, when the subsystems have initialized, the network card should advertise that it is trying to PXE boot. This triggers a DHCP request on your install server, which should be acknowledged. If you like, tail the /var/log/messages file on the install server to see what happens. If you are really curious you could start tcpdump on the install server, listening on port 69, to see whether the TFTP request can be satisfied.

If everything goes well you will see a (very basic) boot menu which automatically boots the default entry, and you can watch the whole installation happening on the lights-out management console. You should do this at least once to ensure that there are no problems with the process. Also bear in mind that you need to be careful where you install the operating system: there is no safety net. All the data on the machine is erased, and you get a fresh Oracle Linux server instead. If you used DDNS you should then have a server that is correctly configured within DNS and ready for the next step, the preparation of the OS for installing Oracle Database 12.1.

Preparing for the Oracle Database installation

With the operating system in a bootable state, it is time to configure the environment to allow for the installation of a database. This process has not changed from previous releases, and it involves changing kernel parameters, installing additional packages, and creating users and groups. The section assumes a fresh installation of Oracle Linux without any existing Oracle-related accounts in place.

The use of Oracle Automatic Storage Management (ASM) requires an installation of Grid Infrastructure for a standalone server; in this respect there is nothing new to the process. Using ASM even for single-instance deployments is worth considering: ASM offers benefits over certain file systems, mainly when it comes to concurrent writes, inode locking, and direct I/O capabilities. On the downside, using ASM for storing database files moves them out of the file system and into an Oracle-specific storage area. Although ASM has been enhanced with command-line-like access, you still need to connect to the ASM instance to work with those files.

Installing additional packages

The first step in the preparation for the Oracle installation involves completing the installation of the required set of packages. A local YUM repository such as the one created earlier can be used to download the packages and resolve any dependencies. To make the repository available to the system create a new file local.repo in /etc/yum.repos.d/ with these lines:

[local]
name = local installation tree
baseurl = http://imageServer/ol64/
enabled = 1
gpgcheck = 1

Note that the gpgcheck value is set to 1 in the repository configuration. Whenever you are using repositories you must ensure that the packages you download from the repository are signed and match the key! Using the above repository you should be able to install the packages required for the next steps.

# yum install compat-libcap1 compat-libstdc++-33 libstdc++-devel gcc-c++ ksh libaio-devel
# yum install xorg-x11-utils xorg-x11-server-utils twm tigervnc-server xterm

The preceding list of commands depends on the packages already present on your system. For a complete list of what is needed, please refer to the Oracle Database Quick Installation Guide 12c Release 1 for Linux x86-64, specifically section 6.1, “Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64”.
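A quick way to see which required packages are still missing is a small rpm loop; the package names below are only a subset of the full list in the installation guide:

```shell
# Report required packages that are not yet installed (subset of the
# list from the Quick Installation Guide; extend as needed).
for p in compat-libcap1 compat-libstdc++-33 libaio-devel ksh gcc-c++; do
    rpm -q "$p" > /dev/null 2>&1 || echo "missing: $p"
done
```

Any package the loop reports can then be fed straight back into yum install.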

If you are planning the installation of Oracle Restart, you also need to install the package “cvuqdisk,” which is part of the Grid Infrastructure installation package. Installing these packages satisfies the Oracle installer.

Creating the operating system users and groups

With all required packages in place it is time to consider the creation of the Oracle user and groups. Whereas this was a simple and straightforward task in releases up to Oracle 11.1, some more thought must now be put into the process if you plan on using ASM or Real Application Clusters. The new process in Oracle 12c even affects the database installation.

The main reason the user and group creation process has become more interesting is the “separation of duties”. In its most basic form, one operating system account, usually named “oracle”, owns the binaries for both Grid Infrastructure and the database. This way, whoever logs into the system as oracle has full control over every aspect of database management. While this is acceptable in most smaller companies, larger institutions use different teams for the management of the Oracle stack. In versions leading up to 12.1, the separation in broad terms was between the storage team and the DBA team: if so desired, separate accounts were used to install and own Grid Infrastructure and the RDBMS binaries. In addition to the storage aspect, Oracle 12.1 introduced new responsibilities: backup, Data Guard, and encryption key management.

Similar to previous releases, the responsibilities are implemented using internal groups such as OSDBA to which operating system groups are mapped. Operating system accounts can then further be mapped to the groups, inheriting the privileges associated with the role. The mapping between Oracle groups and operating system groups can be found in Table 5-2.

Table 5-2. Internal groups, operating system groups, and users

  • OSDBA (RDBMS), typically mapped to operating system group “dba”: Members of the OSDBA group for the database are granted the SYSDBA privilege. The user can log in using the “/ as sysdba” command on the server and has full control over the database.
  • OSOPER (optional), typically “oper”: This is an optional privilege. Members of the OSOPER group for the database are allowed to connect to the system as SYSOPER. The SYSOPER role has been used in the past to allow operators to perform certain tasks such as instance management (starting/stopping) and backup-related work without the ability to look at user data. The role is probably superseded by the ones shown below.
  • OSBACKUPDBA, typically “backupdba”: Allows members to connect using the new SYSBACKUP privilege. The new group has been created to allow non-database administrators to perform backup-related tasks.
  • OSDGDBA, typically “dgdba”: The new SYSDG privilege available to members of this group allows them to perform Data Guard-related tasks.
  • OSKMDBA, typically “kmdba”: This new group is used for users dealing with encryption key management such as for Transparent Data Encryption (TDE) and the Oracle wallet.
  • OSASM, typically “asmadmin”: Members of the OSASM group are given the SYSASM privilege, which has taken over from SYSDBA as the most elevated privilege in ASM. This is quite often assigned to the owner of the Grid Infrastructure installation.
  • OSDBA for ASM, typically “asmdba”: Members of this group have read and write access to files within ASM. If you are opting for a separate owner of Grid Infrastructure, then the binary owner must be part of this group. The owner of the RDBMS binaries must also be included.
  • OSOPER for ASM (optional), typically “asmoper”: Similar in nature to the OSOPER group for the database, the members of this optional group have the rights to perform a limited set of maintenance commands for the ASM instance. Members of this group have the SYSOPER role granted.

Without a policy of separation of duties in place you could map the oracle user to all the above-mentioned groups. In a scenario where storage and database management are separated you could map the ASM-related groups to the grid user, and the rest to oracle. The oracle account also needs the OSDBA for ASM privilege to connect to the ASM instance; without it oracle can’t access its storage. Even if you are not planning on using multiple operating system accounts I still recommend creating the operating system groups. This is simply to give you greater flexibility later on, should you decide to allow accounts other than oracle and grid to perform administration tasks with the database.

Up until now, one other very important group has not been mentioned: oinstall. This group owns the Oracle inventory and is required for each account that needs to modify the binaries on the system. Oracle recommends that every Oracle-related operating system account should have oinstall as its primary group.

Scenario 1: one operating system account for all binaries

This is the simplest case: the oracle account will be a member of all the operating system groups mentioned above. To facilitate such a setup you need to create the operating system groups as shown in the following example. If you are setting your system up for clustering, the numerical user and group IDs need to be consistent across the cluster!

To ensure consistent installations, the numeric user and group-IDs should be part of the standards document covering your build, and the users should ideally be pre-created. For even more consistency you should consider the use of configuration management tools. For a manual installation, you would follow these steps, beginning with the mandatory groups.

image Note  In the following examples a hash or “#” indicates commands to be executed as root; a dollar sign denotes a non-root shell.

# groupadd -g 4200 oinstall
# groupadd -g 4201 dba

These are the minimum groups you need; if you would like greater flexibility later on you can also define the other groups mentioned in the table above. Again, it is recommended to use fixed numeric IDs. Please ensure that the group IDs chosen match those defined in your build standards; the ones shown here are for demonstration only.

# groupadd -g 4202 backupdba
# groupadd -g 4203 dgdba
# groupadd -g 4204 kmdba
# groupadd -g 4205 asmdba
# groupadd -g 4206 asmadmin

You could also create the “oper” groups for the accounts but they are optional since 11.2. With the groups defined you can create the oracle account as follows:

# useradd -u 4200 -g oinstall -G dba,asmdba -m oracle
# passwd oracle
Changing password for user oracle
New password:
Retype new password:
passwd: all authentication tokens updated successfully

If you opted for the creation of the supplementary groups, you could add those to the oracle account:

# usermod -G dba,backupdba,dgdba,kmdba,asmdba,asmadmin oracle

To check how your oracle account is set up, you can use the id command as shown here for the minimum required groups:

# id -a oracle
uid=4200(oracle) gid=4200(oinstall) groups=4200(oinstall),4201(dba)

Note that the oracle account must have the new groups assigned to it, or they will not be selectable in the OUI session later. Once you are happy with your setup, proceed to the section “Checking kernel parameters.”
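A final sanity check before launching the installer never hurts; the sketch below verifies that the groups resolve (extend the list to match the groups you actually created):

```shell
# Confirm the required groups exist; extend the list if you created
# the optional role groups (backupdba, dgdba, and so on) as well.
for g in oinstall dba; do
    getent group "$g" > /dev/null && echo "group $g: ok" || echo "group $g: MISSING"
done
```

Combined with “id -a oracle” this quickly flags a typo in the group setup before OUI does.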

Scenario 2: separation of duties

If you are planning on installing a Real Application Cluster, or want to use ASM, which requires an installation of Oracle Restart, you could separate storage administration from database administration. The most common scenario is to create two operating system accounts, oracle and grid. The main argument against such a setup used to be the problematic support for patching in early Oracle 11.2 releases. These problems have largely been solved, and no issues are expected with different owners for Grid Infrastructure and the database.

Assuming the above-mentioned groups have already been created, you need to set up the grid owner as well as the oracle owner. Consider the following example for the oracle account:

# useradd -u 4200 -g oinstall -G asmdba,dba -m oracle
# passwd oracle
Changing password for user oracle
New password:
Retype new password:
passwd: all authentication tokens updated successfully

Conversely, the grid account could be created as follows:

# useradd -u 4201 -g oinstall -G asmadmin,asmdba,dba -m grid
# passwd grid
Changing password for user grid
New password:
Retype new password:
passwd: all authentication tokens updated successfully

image Note  For some strange reason Oracle requires the grid user to be a member of the dba group; failing that, you won’t be able to install the database software. Optionally, add the oracle user to the kmdba, backupdba, and dgdba groups as well.

That concludes the setup of these accounts. If you like, you can assign the remaining additional groups to the oracle account before proceeding to the next section to allow for even finer granularity of access.

Checking kernel parameters

The Linux kernel has many tunables that affect the way it operates. The Oracle database makes intensive use of these, and you need to modify the standard parameters before you can install the database or Grid Infrastructure software. Many kernel parameters can be changed at runtime by echoing values to files in the /proc file system. To make these changes permanent, you need to modify the /etc/sysctl.conf file, which is parsed at every system boot.

Oracle made it somewhat easier by adding an option to the OUI allowing you to run a fixup script to correct these values to their minimum required settings. If you are using Oracle Linux 5, you could alternatively install the oracle-validated RPM, which helps set some of the parameters before the installation. A similar RPM is available for Oracle 12.1, named Oracle RDBMS Server 12cR1 Pre-Install RPM. You should also consult with your system administration team to adjust the values to fit your hardware optimally.

Tables 5-3 and 5-4 list the parameters and provide advice on setting them. Table 5-3 focuses upon semaphore parameters. Table 5-4 lists all the others. It makes sense to check the parameters even after having installed the preinstall RPM!

Table 5-3. Kernel Parameters relating to semaphores

Kernel parameter

Recommended (minimum) value

Description

semmsl

250

The maximum number of semaphores in a semaphore set. Applications always request semaphores in sets. The number of sets available is defined by the semmni value, see below. Each of these sets contains semmsl semaphores.

semmns

32000

The total number of semaphores permitted system-wide. The value of 32000 = semmsl * semmni.

semopm

100

Sets a limit for the maximum number of operations in a single semaphore-related operating system call.

semmni

128

The maximum number of semaphore sets.

Table 5-4. Other kernel parameters and their recommended minimums

Kernel parameter

Recommended (minimum) value

Description

shmmax

Half the physical memory; set to 4398046511104 bytes by Oracle preinstall

Shared Memory Max (size) defines the maximum size of an individual shared memory segment in bytes. When you start an Oracle instance, it tries to allocate the SGA from shared memory. If the total size of the SGA is greater than shmmax then Oracle will create the SGA consisting of multiple smaller segments, which can have implications on NUMA-enabled systems since memory might not be node-local.

The Oracle validated RPM uses the maximum size permissible on the 64bit platform: 4TB. That should be enough to fit even the largest SGA!

shmmni

4096

This parameter sets the maximum number of shared memory segments permissible. This value comes into play in two ways: first, if you set shmmax to a small value, Oracle has to break down the SGA into smaller pieces. Second, each time an Oracle instance (ASM or RDBMS) is started, the available number is decremented by one.

The value of 4096 recommended in the Oracle installation guides guarantees that you will not run out of shared memory segments.

shmall

1073741824

This parameter determines the system-wide limit on the total number of pages of shared memory. It should be set to shmmax divided by the page size (as reported by getconf PAGE_SIZE).

file-max

6815744

This parameter allows you to set a maximum number of open files for all processes, system-wide. The default should be sufficient for most systems.

ip_local_port_range

9000 65500

The local port range to be used for Oracle (dedicated) server processes should start at 9000 to prevent a clash with non-Oracle operating system services using the port numbers defined in /etc/services.

By lowering the boundary to 9000 you should have enough ephemeral ports for your expected workload.

rmem_default

262144

This sets the default receive buffer size (in bytes) for all types of connections (TCP and UDP).

rmem_max

4194304

This sets the maximum receive buffer size (in bytes) for all connections (TCP and UDP).

wmem_default

262144

This sets the default send buffer size (in bytes) for all types of connections (TCP and UDP).

wmem_max

1048576

This sets the maximum send buffer size (in bytes) for all types of connections.

aio-max-nr

1048576

This parameter is related to the asynchronous I/O model in Linux as defined by libaio and should be set to 1048576 to prevent processes from receiving errors when allocating internal AIO-related memory structures.

The values in Table 5-3 are all listed in /proc/sys/kernel/sem and are responsible for controlling the semaphores in a Linux system. Oracle uses shared memory and semaphores extensively for inter-process communication. In simple terms, semaphores are mainly required by processes to attach to the SGA and to control serialization. The values shown in Table 5-3 should be sufficient for most workloads.
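The current semaphore limits can be inspected directly; the four fields of /proc/sys/kernel/sem map to semmsl, semmns, semopm, and semmni, in that order. A quick sketch:

```shell
# Read the current semaphore limits; field order is semmsl, semmns,
# semopm, semmni.
read -r semmsl semmns semopm semmni < /proc/sys/kernel/sem
echo "semmsl=$semmsl semmns=$semmns semopm=$semopm semmni=$semmni"
```

Compare the printed values against the minimums in Table 5-3 before proceeding.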

Using the values from Tables 5-3 and 5-4, the resulting /etc/sysctl.conf file contains these lines:

 kernel.shmall = 1073741824
 kernel.shmmax = 4398046511104
 kernel.shmmni = 4096
 kernel.sem = 250 32000 100 128
 fs.file-max = 6815744
 net.core.rmem_default = 262144
 net.core.wmem_default = 262144
 net.core.rmem_max = 4194304
 net.core.wmem_max = 1048576
 fs.aio-max-nr = 1048576
 net.ipv4.ip_local_port_range = 9000 65500

You do not need to worry about these though. The Oracle Universal Installer provides a “fixup” option during the installation which can modify the kernel parameters to set the required minimum values. In addition, the Oracle server preinstall RPM ensures that you meet these minimum requirements. You can read more about the preinstall RPM later in this chapter.
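If you prefer to verify the values yourself instead of relying on the fixup script, a small comparison loop can flag parameters below the minimums from the tables. This is only a sketch of my own, reading /proc/sys directly; extend the parameter list as needed:

```shell
# Sketch: flag kernel parameters that fall below a required minimum.
check_param() {
    # usage: check_param <sysctl-key> <required-minimum>
    local key=$1 minimum=$2 path current
    path="/proc/sys/$(echo "$key" | tr . /)"   # fs.file-max -> /proc/sys/fs/file-max
    current=$(cat "$path" 2>/dev/null) || { echo "$key: not readable"; return; }
    if [ "$current" -lt "$minimum" ]; then
        echo "$key = $current (below required minimum $minimum)"
    else
        echo "$key = $current (ok)"
    fi
}

check_param fs.file-max   6815744
check_param fs.aio-max-nr 1048576
check_param kernel.shmmni 4096
```

Note that this only works for single-valued parameters; kernel.sem holds four fields and must be checked separately.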

The Oracle mount points

The mount points and file systems suggested for use with the Oracle binaries have been discussed in the section “Considerations for the Oracle mount point”. Assuming that the storage has been formatted during the installation, the remaining work is simple: just update the file system table in /etc/fstab with a mount point for the Oracle software and you are done. Most often, the Oracle binaries are installed following the Optimal Flexible Architecture (OFA).
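For example, a hypothetical /etc/fstab entry for such a mount point might look like the following; the volume group, logical volume name, and file system type are assumptions to be adapted to your standards:

```
/dev/mapper/vg01-lv_u01   /u01   ext4   defaults   1 2
```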

The starting point for mounting the file systems for use with Oracle usually is the /u01 directory and the hierarchy beneath. Alternatively you could use the top-level directory defined in your standards document. The next few paragraphs follow the OFA recommendation.

First, you need to consider the location of the Oracle inventory. The inventory is often owned by the oracle account but more importantly is owned by the operating system group oinstall. This ensures that in the case of separation of duties other operating system accounts have write permissions to the global inventory location. It is also the reason it is so important to define oinstall as the primary group for the oracle and grid user. In many deployments you will find the Oracle inventory in /u01/app/oraInventory, which is also the default.

Before discussing the installation location of Grid Infrastructure it is important to cover the ORACLE_BASE. For the Grid Infrastructure installation, the Oracle base signifies the location where certain diagnostic and other important log files are stored. From a database perspective, the most important subdirectory is the diagnostic destination which has been introduced with Oracle 11.

An important restriction exists for the installation of the Grid Infrastructure for a cluster: the GRID_HOME must not be in the path of any ORACLE_BASE on your system.

The default location for Grid Infrastructure in a cluster configuration is /u01/app/12.1.0/grid, but in the author’s opinion you should use one more digit to indicate the version number, i.e. /u01/app/12.1.0.1/grid instead. This will make it easier during patching to identify a software home. Remember that beginning with the first patch set for 11g Release 2, Oracle ships patch sets as full releases, which are installed out-of-place.

To sum it up the following directories are needed for an OFA-compliant installation. You can use these as mount points for the logical volumes defined earlier during the installation:

# mkdir -p /u01/app/oraInventory    # Path for the inventory
# mkdir -p /u01/app/oracle          # ORACLE_BASE for the database owner
# mkdir -p /u01/app/grid            # ORACLE_BASE for the grid owner

By defining directories with this structure, you ensure that the Oracle Universal Installer (OUI) will pick up the OFA-compliant setup. Assuming a separate oracle and grid user you would continue by setting the permissions as follows:

# chown grid:oinstall /u01/app/oraInventory/
# chmod 775 /u01/app/oraInventory/
 
# chown grid:oinstall /u01/app/
# chmod -R 775 /u01/app/
 
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

If you are planning on installing the database software only, there is no need for a grid user.

Setting session limits

The Pluggable Authentication Modules, or PAM, are a flexible way to authenticate users in a Unix environment. PAM itself is “only” a framework; its functionality is implemented via modules. Such modules exist for many authentication methods. When a user requests access to a Linux machine, the login process plays a crucial role. Using the services provided by the PAM library, it assesses that the user requesting a service (bash/ksh or another shell) is actually who he claims to be. To that end, a multitude of authentication methods ranging from passwords to LDAP can be employed, which are beyond the scope of this chapter. From an Oracle point of view, one property of the login process is very important: it can assign limits to an individual user session.

The Oracle installation guide mandates that the following requirements are met:

  • The maximum number of open file descriptors must have a soft limit of 1024 and a hard limit of 65536
  • The number of processes a user can create must be at least 2047 with a hard limit of 16384
  • A stack size (per process) of at least 10240 KiB and at most 32768 KiB

These settings are made in the /etc/security/limits.conf file, which unfortunately is still not documented properly in the installation guides. In this file, every line follows this format (see man 5 limits.conf):

domain    type    item    value

The relevant domain for the Oracle database installation is a username. The type can be either hard or soft, and the item denotes which attribute is to be changed. To be more precise, we need to amend at least the items “nofile,” “nproc,” and “stack.” On the x86-64 platform, the values set by the oracle-rdbms-server-12cR1-preinstall RPM are defined as follows:

[root@server1 ~]# cat /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf |
> grep ^oracle
oracle   soft   nofile    1024
oracle   hard   nofile   65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

You will notice a small variation from the requirements postulated by Oracle: the soft limit for the number of processes exceeds the recommendation. The settings need to be repeated if you are planning on using the grid user to own the Grid Infrastructure installation:

grid   soft   nofile    1024
grid   hard   nofile   65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768

The Oracle Database 11g Release 2 preinstall RPM installed its settings into limits.conf directly. The new version installs its settings into /etc/security/limits.d instead. The next time the oracle or grid user logs in, the settings will take effect. The pam_limits.so module is automatically included in the login process via the system-auth module. This way, after traversing the requirements for an interactive shell login, the limits will be set for the oracle and grid user. You can check the limits by using the ulimit command. More detailed information about these settings can be found in the man page pam_limits(8).
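To double-check what a limits file grants a particular user, you can parse the relevant columns directly. The helper below is a sketch of my own, demonstrated against a scratch file standing in for the preinstall RPM’s configuration:

```shell
# Sketch: print the type, item, and value columns for one user's entries
# in a limits.conf-style file.
limits_for() {
    # usage: limits_for <user> <file>
    awk -v u="$1" '$1 == u { print $2, $3, $4 }' "$2"
}

# Demo file standing in for the real configuration:
cat > /tmp/demo-limits.conf <<'EOF'
oracle   soft   nofile    1024
oracle   hard   nofile   65536
grid     soft   nofile    1024
EOF

limits_for oracle /tmp/demo-limits.conf
```

On a live system you would point it at /etc/security/limits.conf or the files under /etc/security/limits.d, or simply log in as the user and run ulimit -a.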

Configuring large pages

Large pages, also known as “huge pages”, are a feature introduced to Linux with the advent of the 2.6 kernel. The use of large pages addresses problems Linux systems can experience managing processes on systems with large amounts of memory. When large pages were introduced, “large” began at about 16 GB of RAM, which is not considered large any more. To explain, a little background information about memory management is required.

The Linux kernel on the Intel IA32 architecture (that is, not Itanium!) uses a default memory page size of 4 KB. All of the physical memory has to be managed by the kernel in tiny chunks of 4 KB. Huge pages on the other hand use a much larger page size of 2 MB. It becomes immediately obvious that the kernel benefits from this, as there are fewer memory pages to manage. But it is not only the reduced number of memory pages the kernel needs to keep track of; there is also a higher probability that the part of the CPU responsible for the translation of virtual to physical memory addresses (the translation lookaside buffer, or TLB) will have a page address cached, resulting in faster access.

In earlier Oracle releases you needed to manually calculate the number of huge pages before starting the Oracle instance. Oracle provides a script as part of a My Oracle Support note, called calc_hugePages.sh. Run against a started Oracle instance, it calculates the required number of large pages. Thankfully, you now get this information in the Oracle database instance’s alert.log, as shown here:

****************** Large Pages Information *****************
 
Total System Global Area in large pages = 0 KB (0%)
 
Large pages used by this instance: 0 (0 KB)
Large pages unused system wide = 0 (0 KB)
Large pages configured system wide = 0 (0 KB)
Large page size = 2048 KB
 
RECOMMENDATION:
  Total System Global Area size is 2514 MB. For optimal performance,
  prior to the next instance restart:
  1. Increase the number of unused large pages by
 at least 1257 (page size 2048 KB, total size 2514 MB) system wide to
  get 100% of the System Global Area allocated with large pages
***********************************************************

With the information that another 1257 large pages (of 2048 KB) are required, you can modify the /etc/sysctl.conf file to ensure these are set aside when the system boots. The value to be entered is the sum of all the huge pages needed for all SGAs on the host, plus a few extra for safety. Only one database will be used on the server in the following example:

vm.nr_hugepages = 1400

Memory permitting, you could try to reserve them without a reboot: simply echo “1400” into /proc/sys/vm/nr_hugepages. If your memory is too fragmented this might not work, and you have to reboot. The use of huge pages is shown in the /proc/meminfo file. To check if the requested number of huge pages is available, you could grep for HugePages:

[root@server1 ~]# grep HugePages /proc/meminfo
HugePages_Total:     1400
HugePages_Free:      1400
HugePages_Rsvd:        0
HugePages_Surp:        0
[root@server1 ~]#
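The arithmetic behind the alert.log recommendation is easy to reproduce. The helper below is my own sketch, not Oracle’s calc_hugePages.sh; it rounds a given total SGA size up to whole large pages:

```shell
# Sketch: number of large pages needed to back a given total SGA size.
hugepages_needed() {
    # usage: hugepages_needed <total-sga-mb> <hugepage-size-kb>
    local sga_mb=$1 hp_kb=$2
    echo $(( (sga_mb * 1024 + hp_kb - 1) / hp_kb ))   # round up
}

hugepages_needed 2514 2048    # the 2514 MB SGA from the alert.log -> 1257
```

Sum the result over all instances on the host and add a safety margin; that is how the example arrives at 1400 pages for a single 2514 MB SGA.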

The change to /etc/sysctl.conf alone will not allow the oracle (and grid) user to use large pages. Large pages require the memory to be “locked,” and large pages cannot be paged out. Since the 12c preinstall RPM does not set the necessary parameter in /etc/security/limit*, you need to do so yourself. Using your favorite text editor, modify /etc/security/limits.conf or its equivalent, and add configuration parameters similar to these:

oracle   soft    memlock   60397977
oracle   hard    memlock   60397977
 
grid     soft    memlock   60397977
grid     hard    memlock   60397977

The value to be set is in kilobytes. You could simply take the amount of RAM in your server minus 10 percent and set this in the file. Remember that this is not the allocation; it only defines how much memory the process may lock in theory.
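The rule of thumb just mentioned, physical RAM minus 10 percent, can be computed as follows. The function is a sketch of mine; it expects MemTotal in kilobytes as found in /proc/meminfo:

```shell
# Sketch: memlock value in KB as 90 percent of physical memory.
memlock_kb() {
    # usage: memlock_kb <memtotal-kb>
    echo $(( $1 * 90 / 100 ))
}

# On a live system, feed it the MemTotal line from /proc/meminfo:
memlock_kb "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
```

A host with 64 GB of RAM (67108864 KB) yields 60397977 KB, the value used in the limits example above.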

A new session is required for these settings to take effect. If you want to ensure that your database enforces the use of large pages, set “use_large_pages” to “only”. Note this is incompatible with Automatic Memory Management! If you have memory_target set in your initialization file, you will get the following error:

******************************************************************
Large pages are not compatible with specified SGA memory parameters
use_large_pages = "ONLY" cannot be used with memory_target,
memory_max_target, _db_block_cache_protect or
use_indirect_data_buffers parameters
Large pages are compatible with sga_target and shared_pool_size
******************************************************************

If permissions are set correctly, namely the memlock item in /etc/security/limits.conf, and if enough large pages are available you will see a success message in the alert.log:

****************** Large Pages Information *****************
Parameter use_large_pages = only
Per process system memlock (soft) limit = 58 GB
 
Total System Global Area in large pages = 2514 MB (100%)
 
Large pages used by this instance: 1257 (2514 MB)
Large pages unused system wide = 143 (286 MB)
Large pages configured system wide = 1400 (2800 MB)
Large page size = 2048 KB
***********************************************************

It is possible that only a part of the SGA uses large pages, which is a scenario to be avoided. Setting the initialization parameter “use_large_pages” to “only” as shown in the preceding example ensures that the SGA in its entirety uses large pages. Please do not over-allocate large pages! Large pages affect only the SGA; private memory structures such as the PGA and UGA do not benefit from them. You need to leave enough memory available on the system for user sessions, or otherwise risk problems.

Introducing the oracle-rdbms-server preinstall package

Shortly after Oracle announced support for 11.2.0.3 on Oracle Linux 6, the company also released an updated RPM to simplify the installation of the software for RAC and single instance. Similar to the well-known “oracle-validated” RPM used with Oracle Linux 5, this new package performs many necessary pre-installation steps needed for Oracle database 12c (a separate RPM is available for 11g Release 2). Among the steps performed are the creation of the oracle account and the modification of kernel and session-related parameters. The package also modifies the kernel boot loader configuration file. It is a great help when getting started with Oracle, but it is not foolproof. As with any piece of software, understanding the changes it makes to the operating system is crucial to configuring a robust system.

The preinstall RPMs are tied to a specific database version, at the time of this writing 11.2 and 12.1. During testing it made no difference to the installation of the 12c database and Grid Infrastructure packages whether the 11g Release 2 or the 12c Release 1 RPM was installed. This is because the system requirements are very similar for both.

Before adding any of these packages to the default build, ensure that the package matches the build standard. This is especially important with regard to the user and group IDs. If needed, you can always get the source RPM (“SRPM”) file and modify the settings in it. In addition, none of the preinstall RPMs creates a grid user for a separation of duties, nor do they create the new operating system groups introduced in the earlier section “Creating the operating system users and groups”.

Configuring storage

In addition to the storage required to host the Oracle binaries, additional space is required for the actual Oracle database. A number of choices exist for the underlying file system, including ASM. For each of them, the starting point will be the logical unit number(s), or LUN(s), presented to the host by the storage administrators. Most systems will use multiple paths to the storage for performance and resilience. All major storage vendors have their own proprietary multipathing software: EMC’s PowerPath, Hitachi Dynamic Link Manager, and too many more to list here.

In addition to these proprietary drivers, Linux has its own generic multipathing package, called dm-multipath. In the following section you will see how it is used in the context of an Oracle database installation.

Partitioning LUNs

Before beginning the multipathing configuration, which is covered in the next section, it is beneficial to partition the devices at this stage. This prevents having to unload and interfere with the devices once the multipath configuration is in place. Two utilities exist for partitioning: parted and fdisk. This section focuses on the fdisk utility.

image Note  You need parted if you need to create LUNs with a size > 2 TB. Such devices cannot be addressed by the Master Boot Record (MBR) format, but rather need a GUID partition table (GPT). This is a limitation of the addressing scheme of the hard disk, not of the operating system.

Once the LUN you are interested in is discovered on the operating system you can partition it. Most often you will get worldwide IDs from the storage team to use on the new database server. Using the /dev/disk/by-id directory it is possible to identify the SAN storage based on these WWIDs and to partition it. Pass the complete path of the disk as an argument to fdisk, as shown in this example:

# fdisk /dev/disk/by-id/scsi-1IET_00010001

Following this, you need to create a partition spanning the whole disk. The steps are shown in the listing below, where the following operations are carried out:

  • The LUN is selected
  • The display unit is changed to sectors
  • A new primary partition is created with an offset. Different vendors require different offsets; check with your storage administrator on how to define the partition to match the requirements stated in the documentation. Also, many advanced-format devices such as flash-based storage use a 4096-byte sector size instead of the previous 512-byte sectors. For optimal performance, partitions have to be aligned at 4k boundaries on these types of devices.

image Note  As you will have noticed the examples presume an iSCSI LUN. Storage provided via Fibre Channel uses a different notation for the WWID, but other than that the steps are identical.

Following is an example. The bold text shows the responses I typed while generating the example.

# fdisk /dev/disk/by-id/scsi-1IET_00010001
 
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').
 
Command (m for help): u
Changing display/entry units to sectors
 
Command (m for help): c
DOS Compatibility flag is not set
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-4196351, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4196351, default 4196351):
Using default value 4196351
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

This example created a primary partition on a SCSI device. With such a device partitioned, you can start configuring the multipath software.

Configuring dm-multipath

The principle behind any multipathing software is simple. Without the abstraction layer, the operating system “sees” each block device once via each path. That means a LUN can be /dev/sdb and additionally /dev/sdk, yet still be the same physical device, just reached via two different paths to the storage. Without a multipath driver the operating system could not easily tell that the two devices are logically the same. Thanks to the driver, however, this becomes possible. Instead of using the native devices, a new pseudo-device is introduced to which the application (read: Oracle) sends I/O requests. The pseudo-device is created in a number of different places:

  • In the /dev directory such as /dev/dm-35
  • Again in the /dev directory, but with /dev/mpathn or a WWID
  • In the /dev/mapper directory with a user defined alias

In the case of the dm-multipath package, the mapping between block device and pseudo-device is performed in the main configuration file: /etc/multipath.conf. An important aspect is to only ever use the pseudo-device. Otherwise there would be neither protection from path failures nor any performance gain!

Oracle Linux comes with the multipath package as part of the standard installation. If the package is not yet available, you should install the package device-mapper-multipath.x86_64 including all its dependencies. When installed, you need to ensure that the multipath daemon is started at every system boot; this is done via the chkconfig application as shown here:

# chkconfig --level 35 multipathd on
# chkconfig --list | grep -i multipath
multipathd      0:off   1:off   2:off   3:on    4:off   5:on    6:off

Unlike Oracle Linux 5, there is no example content in /etc/multipath.conf; the file does not exist. A number of example configurations are available in the documentation directory. That directory is named as follows, replacing version with your own version number:

 /usr/share/doc/device-mapper-multipath-version

The quickest way to start with a basic failover scenario is to follow the recommendation from the online documentation and use the mpathconf utility as shown here:

# mpathconf --enable --with_multipathd y
# echo $?
0

The command creates a basic configuration file, /etc/multipath.conf, and additionally loads the kernel modules necessary for the correct operation of the package. Querying the mpathconf command shows the successful execution:

# mpathconf
multipath is enabled
find_multipaths is disabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is chkconfiged on

The next step is to review the newly created multipath.conf file, which is very similar to the format previously used in Oracle Linux 5. The file is still subdivided into sections, using the familiar curly braces. The most important sections are these:

  • Blacklist
  • Defaults
  • Devices
  • Multipaths

The first section, blacklist {}, specifically excludes devices from being part of the multipathing configuration. This is necessary for local devices that should not be part of the configuration. A new directive, find_multipaths, provides the administrator with some help in regard to blacklisted devices. Unlike the multipathing software in Oracle Linux 5, which tried to create a new pseudo-device for every path it encountered, this behavior can be kept in check without explicit blacklisting, using find_multipaths.

The next sections, defaults and devices, are hugely vendor-specific. Every storage vendor keeps information about the multipath.conf file and their storage products in their support portals. It is strongly recommended to either raise a call with your vendor or consult their documentation for the defaults and devices sections. Interestingly, the defaults section does not need to be supplied at all; the package uses built-in values for anything not specified in the defaults {} section. These defaults are documented in the /usr/share/doc/device-mapper-multipath-version/multipath.conf file. On an example system, the following values were used:

defaults {
        find_multipaths     yes
        user_friendly_names yes
}

All other values will be provided from the built-in defaults. This is not true for the devices {} section, which overrides the defaults for a specific array. For example, the following has been copied from the Oracle Linux 6.4 multipath.conf.defaults file and can be used for an EMC Clariion array:

devices {
       device {
               vendor "DGC"
               product ".*"
               product_blacklist "LUNZ"
               path_grouping_policy group_by_prio
               getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
               path_selector "round-robin 0"
               path_checker emc_clariion
               features "1 queue_if_no_path"
               hardware_handler "1 emc"
               prio emc
               failback immediate
               rr_weight uniform
               no_path_retry 60
               rr_min_io 1000
               rr_min_io_rq 1
       }
}

Again, this is highly device-specific and should come from the storage vendor.

The last section, multipaths {}, contains specific mapping instructions for individual LUNs. This becomes clearer with an example:

multipaths {
        multipath {
                wwid "1IET     00010001"
                alias OCR001
        }
        multipath {
                ...
        }
        ...
}

This section is optional; multipath devices will be created even if there are no instructions for mappings between WWID and device name. In the above example, the device with WWID "1IET     00010001" will be configured as /dev/mapper/OCR001. The benefit of adding more human-friendly names is that troubleshooting becomes a lot easier. Instead of having to hunt down the planned purpose for device /dev/mpatha you immediately know why the device has been created. On the other hand there is added overhead involved in maintaining the mapping. Since the device naming can be a little bit confusing, here is a summary of how device names are created when using dm-multipath:

  • If user_friendly_names is set to yes, the device will be created in /dev/mapper/mpath*.
  • If user_friendly_names is set to no, the device will be created as /dev/mapper/WWID which is very unreadable for humans, making it difficult to find a specific device.
  • If a direct mapping exists in the multipaths {} section, the alias name will be used.
  • The internal devices (/dev/mpath* and /dev/dm-*) are always created, but should not be used in the context of the Oracle database.

If you know the device mapper, you might wonder why there is no mention of the gid, uid, and mode settings. These were very useful in Oracle Linux 5.x to set the ownership of LUNs when using ASM. Unfortunately this functionality has been deprecated, and once more udev rules have to be used instead. A recent alternative is the use of ASMLib. The use of udev and dm-multipath is a little complicated for an Oracle DBA, partly because LVM and the multipath driver share a very similar interface. Thankfully, the device mapper multipath module takes care of half the work. The rest needs to be done in a rules file.

All rules reside in /etc/udev/rules.d. Enter the directory and create a new file, named 61-asm.rules for example. It is important that the file name ends in “.rules”; otherwise it will not be parsed. The following settings allow the disks to be discovered by Oracle later, assuming a separation of duties:

KERNEL=="dm-*", PROGRAM="/sbin/scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="1IET     00010001", OWNER="grid", GROUP="asmdba", MODE="0660"

KERNEL=="sd*", PROGRAM="/sbin/scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="1IET     00010001", OWNER="grid", GROUP="asmdba", MODE="0660"

You have to reload the udev rules for the changes to take effect. This can be done without a reboot, using the “udevadm trigger” command as root. The last example in this chapter demonstrates the correctness of the setting by launching a disk discovery from the command line. For example:

[grid@server1 OraInstall2013-08-25_05-22-30PM]$ ./ext/bin/kfod disks=all \
> asm_diskstring='/dev/mapper/*p1'
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:       2048 Mb /dev/mapper/OCR001p1                     grid     asmdba
...

As you can see, the setting is correct and the disk is discovered. Oracle executes the kfod command whenever it wants to configure ASM disks.

Summary

The groundwork for installing Oracle has been laid in this chapter. In the first half of the chapter the Oracle Linux 6 installation was described at great length to help you understand the automated installation, which was explained next. Oracle Linux comes with a great method for automating the installation of the operating system, which truly helps when building many servers quickly. Combined with DHCP and DDNS, servers could potentially be rolled out very quickly. Security constraints usually apply and should be taken seriously, however; new servers should be built in a secure network before they are hardened and made production ready.

After the installation of the operating system, you need to prepare the server for the Oracle database installation. Additional packages need to be installed, users created, and kernel parameters adjusted; all pretty much standard Oracle day-to-day operations. Finally, the storage setup using the device-mapper-multipath package was described, along with how it changed in Oracle Linux 6. This should give you a solid foundation to proceed with the next task: the installation of the Oracle binaries.
