CHAPTER 3

Configuring an Efficient Environment

After you install the Oracle binaries and create a database, you should configure your environment to enable you to operate efficiently. Regardless of the functionality of graphical database administration tools, DBAs still need to perform many tasks from the OS command line and manually execute SQL statements. A DBA who takes advantage of the OS and SQL has a clear advantage over a DBA who doesn’t.

In any database environment (Oracle, MySQL, and so on), an effective DBA uses advanced OS features to quickly navigate directories, locate files, repeat commands, display system bottlenecks, and so forth. To achieve this efficiency, you must be knowledgeable about the OS that houses the database.

In addition to being proficient with the OS, you must also be skillful with the SQL interface into the database. Although you can glean much diagnostic information from graphical interfaces, SQL enables you to take a deeper dive into the internals to do advanced troubleshooting and derive database intelligence.

This chapter lays the foundation for efficient use of the OS and SQL to manage your databases. You can use the following OS and database features to configure your environment for effectiveness:

  • OS variables
  • Shell aliases
  • Shell functions
  • Shell scripts
  • SQL scripts

When you’re in a stressful situation, it’s paramount to have an environment in which you can quickly discern where you are and what accounts you’re using and to have tools that help you quickly identify problems. The techniques described in this chapter are like levers: they provide leverage for doing large amounts of work fast. These tools let you focus on the issues you may be facing instead of verifying your location or worrying about command syntax.

This chapter begins by detailing OS techniques for enabling maximum efficiency. Later sections show how you can use these tools to display environment details automatically, navigate the file system, monitor the database proactively, and triage problems.

Tip   Consistently use one OS shell when working on your database servers. I recommend that you use the Bash shell; it contains all the most useful features from the other shells (Korn and C), plus it has additional features that add to its ease of use.

Customizing Your OS Command Prompt

Typically, DBAs work with multiple servers and multiple databases. In these situations, you may have numerous terminal sessions open on your screen. You can run the following types of commands to identify your current working environment:

$ hostname -a
$ id
$ who am i
$ echo $ORACLE_SID
$ pwd
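If you run these checks often, you can combine them into a single line. Here is a minimal sketch (the output format is my own; ORACLE_SID may be unset if you haven't yet sourced your Oracle environment):

```shell
# Combine the identification commands above into one line of output.
# ORACLE_SID may be unset if the Oracle environment hasn't been sourced.
echo "host=$(hostname) user=$(id -un) sid=${ORACLE_SID:-unset} cwd=$(pwd)"
```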

To avoid confusion about which server you’re working on, it’s often desirable to configure your command prompt to display information about your environment, such as the machine name and database SID. In this example the command prompt is customized to include the hostname, user, and Oracle SID:

$ PS1='[\h:\u:${ORACLE_SID}]$ '

The \h specifies the hostname. The \u specifies the current OS user. $ORACLE_SID contains the current setting for your Oracle instance identifier. Here is the command prompt for this example:

[oracle12c:oracle:o12c]$

The command prompt contains three pieces of important information about the environment: server name, OS username, and database name. When you’re navigating among multiple environments, setting the command prompt can be an invaluable tool for keeping track of where you are and what environment you’re in.

If you want the OS prompt automatically configured when you log in, then you need to set it in a startup file. In a Bash shell environment, you typically use the .bashrc file. This file is normally located in your HOME directory. Place the following line of code in .bashrc:

PS1='[\h:\u:${ORACLE_SID}]$ '

When you place this line of code in the startup file, your OS prompt is set automatically each time you log in to the server. In other shells, such as the Korn shell, the .profile file is the startup file.

Depending on your personal preference, you may want to modify the command prompt for your particular needs. For example, many DBAs like the current working directory displayed in the command prompt. To display the current working directory information, add the \w variable:

$ PS1='[\h:\u:\w:${ORACLE_SID}]$ '

As you can imagine, a wide variety of options are available for the information shown in the command prompt. Here is another popular format:

$ PS1='[\u@${ORACLE_SID}@\h:\W]$ '

Table 3-1 lists many of the Bash shell variables you can use to customize the OS command prompt.

Table 3-1. Bash Shell Backslash-Escaped Variables Used for Customizing the Command Prompt

Variable Description
\a ASCII bell character
\d Date in “weekday month day-of-month” format
\h Hostname
\e ASCII escape character
\j Number of jobs managed by the shell
\l Base name of the shell’s terminal device
\n Newline
\r Carriage return
\s Name of the shell
\t Time in 24-hour HH:MM:SS format
\T Time in 12-hour HH:MM:SS format
\@ Time in 12-hour am/pm format
\A Time in 24-hour HH:MM format
\u Username of the current user
\v Version of the Bash shell
\V Release of the Bash shell
\w Current working directory
\W Base name of the current working directory (not the full path)
\! History number of command
\$ If the effective user identifier (UID) is 0, then displays #; otherwise, displays $

The variables available for use with your command prompt vary somewhat by OS and shell. For example, in a Korn shell environment you can use command substitution with the hostname command to display the server name in the OS prompt:

$ export PS1="[`hostname`]$ "

If you want to include the ORACLE_SID variable within that string, then set it as follows:

$ export PS1="[`hostname`:${ORACLE_SID}]$ "

Try not to go overboard in terms of how much information you display in the OS prompt. Too much information limits your ability to type in and view commands on one line. As a rule of thumb, minimally you should include the server name and database name displayed in the OS prompt. Having that information readily available will save you from making the mistake of thinking that you’re in one environment when you’re really in another.

Customizing Your SQL Prompt

DBAs frequently use SQL*Plus to perform daily administrative tasks. Often, you’ll work on servers that contain multiple databases. Obviously, each database contains multiple user accounts. When connected to a database, you can run the following commands to verify information such as your username, database connection, and hostname:

SQL> show user;
SQL> select name from v$database;

A more efficient way to determine your username and SID is to set your SQL prompt to display that information; for example,

SQL> SET SQLPROMPT '&_USER.@&_CONNECT_IDENTIFIER.> '

An even more efficient way to configure your SQL prompt is to have it automatically run the SET SQLPROMPT command when you log in to SQL*Plus. Follow these steps to fully automate this:

  1. Create a file named login.sql, and place in it the SET SQLPROMPT command.
  2. Set your SQLPATH OS variable to include the directory location of login.sql. In this example the SQLPATH OS variable is set in the .bashrc OS file, which is executed each time a new shell is logged in to or started. Here is the entry:
    export SQLPATH=$HOME/scripts
  3. Create a file named login.sql in the $HOME/scripts directory. Place the following line in the file:
    SET SQLPROMPT '&_USER.@&_CONNECT_IDENTIFIER.> '
  4. To see the result, you can either log out and log back in to your server or source the .bashrc file directly:
    $ . ./.bashrc

Now, log in to SQL. Here is an example of the SQL*Plus prompt:

SYS@devdb1>

If you connect to a different user, this should be reflected in the prompt:

SQL> conn system/foo

The SQL*Plus prompt now displays

SYSTEM@devdb1>

Setting your SQL prompt is an easy way to remind yourself which environment you’re in and which user you’re currently connected as. This will help prevent you from accidentally running an SQL statement in the wrong environment. The last thing you want is to think you’re in a development environment and then discover that you’ve run a script to delete objects while connected to a production environment.

Table 3-2 contains a complete list of SQL*Plus variables that you can use to customize your prompt.

Table 3-2. Predefined SQL*Plus Variables

Variable Description
_CONNECT_IDENTIFIER Connection identifier, such as the Oracle SID
_DATE Current date
_EDITOR Editor used by the SQL EDIT command
_O_VERSION Oracle version
_O_RELEASE Oracle release
_PRIVILEGE Privilege level of the current connected session
_SQLPLUS_RELEASE SQL*Plus release number
_USER Current connected user

Creating Shortcuts for Frequently Used Commands

In Linux/Unix environments, you can use two common methods to create shortcuts to other commands: create aliases for often repeated commands and use functions to form shortcuts for groups of commands. The following sections describe ways in which you can deploy these two techniques.

Using Aliases

An alias is a simple mechanism for creating a short piece of text that will execute other shell commands. Here is the general syntax:

$ alias <alias_name>='<shell command>'

For instance, when faced with database problems, it’s often useful to create an alias that runs a cd command that places you in the directory containing the database alert log. This example creates an alias (named bdump) that changes the current working directory to where the alert log is located:

$ alias bdump='cd /u01/app/oracle/diag/rdbms/o12c/o12c/trace'

Now, instead of having to type the cd command, along with a lengthy (and easily forgettable) directory path, you can simply type bdump and be placed in the specified directory:

$ bdump
$ pwd
/u01/app/oracle/diag/rdbms/o12c/o12c/trace

The prior technique allows you to navigate to the directory of interest efficiently and accurately. This is especially handy when you manage many different databases on different servers. You simply have to set up a standard set of aliases that allow you to navigate and work more efficiently.

To show all aliases that have been defined, use the alias command, with no arguments:

$ alias

Listed next are some common examples of alias definitions you can use:

alias l.='ls -d .*'
alias ll='ls -l'
alias lsd='ls -altr | grep ^d'
alias sqlp='sqlplus "/ as sysdba"'
alias shutdb='echo "shutdown immediate;" | sqlp'
alias startdb='echo "startup;" | sqlp'

If you want to remove an alias definition from your current environment, use the unalias command. The following example removes the alias for lsd:

$ unalias lsd
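Keep in mind that unalias affects only the current session; an alias defined in a startup file such as .bashrc will be back the next time you log in. Here is a quick sketch of the alias life cycle:

```shell
# Define, display, and remove an alias within the current session.
alias lsd='ls -altr | grep ^d'   # create the alias
alias lsd                        # display its definition
unalias lsd                      # remove it
alias lsd 2>/dev/null || echo "lsd is no longer defined"
```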

LOCATING THE ALERT LOG

In Oracle Database 11g and higher, the alert log directory path has this structure:

ORACLE_BASE/diag/rdbms/LOWER(<db_unique_name>)/<instance_name>/trace

Usually (but not always) the db_unique_name is the same as the instance_name. In Data Guard environments the db_unique_name will often not be the same as the instance_name. You can verify the directory path with this query:

SQL> select value from v$diag_info where name = 'Diag Trace';

The name of the alert log follows this format:

alert_<ORACLE_SID>.log

You can also locate the alert log from the OS (whether the database is started or not) via these OS commands:

$ cd $ORACLE_BASE
$ find . -name alert_<ORACLE_SID>.log

In the prior find command you’ll need to replace the <ORACLE_SID> value with the name of your database.

Using a Function

Much like an alias, you can also use a function to form command shortcuts. A function is defined with this general syntax:

$ function <function_name> {
  shell commands
}

For example, the following code creates a simple function (named bdump) that allows you to change the current working directory, dependent on the name of the database passed in:

function bdump {
if [ "$1" = "engdev" ]; then
  cd /orahome/app/oracle/diag/rdbms/engdev/ENGDEV/trace
elif [ "$1" = "stage" ]; then
  cd /orahome/app/oracle/diag/rdbms/stage/STAGE/trace
fi
echo "Changing directories to $1 Diag Trace directory"
pwd
}

You can now type bdump, followed by a database name at the command line, to change your working directory to the Oracle background dump directory:

$ bdump stage
Changing directories to stage Diag Trace directory
/orahome/app/oracle/diag/rdbms/stage/STAGE/trace

Using functions is usually preferable to using aliases. Functions are more powerful than aliases because of features such as the ability to operate on parameters passed in on the command line and allowing for multiple lines of code and therefore more complex coding.
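As a sketch of what an alias can’t do, here is a hypothetical function that validates its parameter before acting on it (the function name and messages are illustrative only):

```shell
# A function can validate parameters and branch -- an alias cannot.
# The function name and messages here are hypothetical examples.
function chkdb {
  if [ -z "$1" ]; then
    echo "Usage: chkdb <dbname>"
    return 1
  fi
  echo "Checking database: $1"
}
```

Calling chkdb with no argument prints the usage message, whereas chkdb o12c prints "Checking database: o12c".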

DBAs commonly establish functions by setting them in the $HOME/.bashrc file. A better way to manage functions is to create a file that stores only function code and call that file from the .bashrc file. It’s also better to store special-purpose files in directories that you’ve created for these files. For instance, create a directory named bin under $HOME. Then, in the bin directory, create a file named dba_fcns, and place in it your function code. Now, call the dba_fcns file from the .bashrc file. Here is an example of an entry in a .bashrc file:

. $HOME/bin/dba_fcns

Listed next is a small sample of some of the types of functions you can use:

# show environment variables in sorted list
  function envs {
    if test -z "$1"
      then /bin/env | /bin/sort
      else /bin/env | /bin/sort | /bin/grep -i $1
    fi
  } # envs
#-----------------------------------------------------------#
# find largest files below this point
function flf {
  find . -ls | sort -nrk7 | head -10
}
#-----------------------------------------------------------#
# find largest directories consuming space below this point
function fld {
  du -S . | sort -nr | head -10
}
#-----------------------------------------------------------#
function bdump {
  if [ $ORACLE_SID = "o12c" ]; then
    cd /u01/app/oracle/diag/rdbms/o12c/o12c/trace
  elif [ $ORACLE_SID = "CDB1" ]; then
    cd /u01/app/oracle/diag/rdbms/cdb1/CDB1/trace
  elif [ $ORACLE_SID = "rcat" ]; then
    cd /u01/app/oracle/diag/rdbms/rcat/rcat/trace
  fi
  pwd
} # bdump

If you ever wonder whether a shortcut is an alias or a function, use the type command to verify a command’s origin. This example verifies that bdump is a function:

$ type bdump
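With Bash you can also use type -t, which prints just the category (alias, function, builtin, or file). For example, assuming a throwaway function named hello has been defined:

```shell
# type -t prints only the category of a command name.
hello() { echo "hi"; }   # a throwaway function for illustration
type -t hello            # prints: function
type -t cd               # prints: builtin
```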

Rerunning Commands Quickly

When there are problems with a database server, you need to be able to quickly run commands from the OS prompt. You may be having some sort of performance issue and want to run commands that navigate you to directories that contain log files, or you may want to display the top consuming processes from time to time. In these situations, you don’t want to waste time having to retype command sequences.

One time-saving feature of the Bash shell is that it has several methods for editing and rerunning previously executed commands. The following list highlights several options available for manipulating previously typed commands:

  • Scrolling with the up and down arrow keys
  • Using Ctrl+P and Ctrl+N
  • Listing the command history
  • Searching in reverse
  • Setting the command editor

Each of these techniques is described briefly in the following sections.

Scrolling with the Up and Down Arrow Keys

You can use the up arrow to scroll up through your recent command history. As you scroll through previously run commands, you can rerun a desired command by pressing the Enter or Return key.

If you want to edit a command, use the Backspace key to erase characters, or use the left arrow to navigate to the desired location in the command text. After you’ve scrolled up through the command stack, use the down arrow to scroll back down through previously viewed commands.

Note   If you’re familiar with Windows, scrolling through the command stack is similar to using the DOSKEY utility.

Using Ctrl+P and Ctrl+N

The Ctrl+P keystroke (pressing the Ctrl and P keys at the same time) displays your previously entered command. If you’ve pressed Ctrl+P several times, you can scroll back down the command stack by pressing Ctrl+N (pressing the Ctrl and N keys at the same time).

Listing the Command History

You can use the history command to display commands that the user previously entered:

$ history

Depending on how many commands have previously been executed, you may see a lengthy stack. You can limit the output to the last n commands by providing a number with the command. For example, the following command lists the last five commands that were run:

$ history 5

Here is some sample output:

273  cd -
274  grep -i ora alert.log
275  ssh -Y -l oracle 65.217.177.98
276  pwd
277  history 5

To run a previously listed command in the output, use an exclamation point (!) (sometimes called the bang), followed by the history number. In this example, to run the pwd command on line 276, use !276, as follows:

$ !276

To run the last command you ran, use !!, as shown here:

$ !!

Searching in Reverse

Press Ctrl+R, and you’re presented with the Bash shell reverse-search utility:

(reverse-i-search)`':

From the reverse-i-search prompt, as you type each letter, the tool automatically searches through previously run commands that have text similar to the string you entered. As soon as you’re presented with the desired command match, you can rerun the command by pressing the Enter or Return key. To view all commands that match a string, press Ctrl+R repeatedly. To exit the reverse search, press Ctrl+C.

Setting the Command Editor

You can use the set -o command to make your command-line editor be either vi or emacs. This example sets the command-line editor to be vi:

$ set -o vi

Now, when you press Esc+K, you’re placed in a mode in which you can use vi commands to search through the stack of previously entered commands.

For example, if you want to scroll up the command stack, you can use the K key; similarly, you can scroll down using the J key. When in this mode you can use the slash (/) key and then type a string to be searched for in the entire command stack.

Tip   Before you attempt to use the command editor feature, be sure you’re thoroughly familiar with either the vi or emacs editor.

A short example will illustrate the power of this feature. Say you know that you ran the ls -altr command about an hour ago. You want to run it again, but this time without the r (reverse-sort) option. To enter the command stack, press Esc+K:

$ Esc+K

You should now see the last command you executed. To search the command stack for the ls command, type /ls, and then press Enter or Return:

$ /ls

The most recently executed ls command appears at the prompt:

$ ls -altr

To remove the r option, use the right arrow key to place the prompt over the r on the screen, and press X to remove the r from the end of the command. After you’ve edited the command, press the Enter or Return key to execute it.

Developing Standard Scripts

I’ve worked in shops where the database administration team developed hundreds of scripts and utilities to help manage an environment. One company had a small squad of DBAs whose job function was to maintain the environmental scripts. I think that’s overkill. I tend to use a small set of focused scripts, with each script usually less than 50 lines long. If you develop a script that another DBA can’t understand or maintain, then it loses its effectiveness.

Note   All the scripts in this chapter are available for download from the Source Code/Download area of the Apress web site (www.apress.com).

This section contains several short shell functions, shell scripts, and SQL scripts that can help you manage a database environment. This is by no means a complete list of scripts—rather, it provides a starting point from which you can build. Each subsection heading is the name of a script.

Note   Before you attempt to run a shell script, ensure that it’s executable. Use the chmod command to achieve this: chmod 750 <script>.

dba_setup

Usually, you’ll establish a common set of OS variables and aliases in the same manner for every database server. When navigating among servers, you should set these variables and aliases in a consistent and repeatable manner. Doing so helps you (or your team) operate efficiently in every environment. For example, it’s extremely useful to have the OS prompt set in a consistent way when you work with dozens of different servers. This helps you quickly identify what box you’re on, which OS user you’re logged in as, and so on.

One technique is to store these standard settings in a script and then have that script executed automatically when you log in to a server. I usually create a script named dba_setup to set these OS variables and aliases. You can place this script in a directory such as $HOME/bin and automatically execute the script via a startup script (see the section “Organizing Scripts,” later in this chapter). Here are the contents of a typical dba_setup script:

# set prompt
PS1='[\h:\u:${ORACLE_SID}]$ '
#
export EDITOR=vi
export VISUAL=$EDITOR
export SQLPATH=$HOME/scripts
set -o vi
#
# list directories only
alias lsd="ls -p | grep /"
# show top cpu consuming processes
alias topc="ps -e -o pcpu,pid,user,tty,args | sort -n -k 1 -r | head"
# show top memory consuming processes
alias topm="ps -e -o pmem,pid,user,tty,args | sort -n -k 1 -r | head"
#
alias sqlp='sqlplus "/ as sysdba"'
alias shutdb='echo "shutdown immediate;" | sqlp'
alias startdb='echo "startup;" | sqlp'

dba_fcns

Use this script to store OS functions that help you navigate and operate in your database environment. Functions tend to be more flexible than aliases. You can be quite creative with the number and complexity of functions you use. The idea is that you want a consistent and standard set of functions that you can call, no matter which database server you’re logged in to.

Place this script in a directory such as $HOME/bin. Usually, you’ll have this script automatically called when you log in to a server via a startup script (see the section “Organizing Scripts,” later in this chapter). Here are some typical functions you can use:

#-----------------------------------------------------------#
# show environment variables in sorted list
  function envs {
    if test -z "$1"
      then /bin/env | /bin/sort
      else /bin/env | /bin/sort | /bin/grep -i $1
    fi
  } # envs
#-----------------------------------------------------------#
# login to sqlplus
  function sp {
    time sqlplus "/ as sysdba"
  } # sp
#-----------------------------------------------------------#
# find largest files below this point
function flf {
  find . -ls | sort -nrk7 | head -10
}
#-----------------------------------------------------------#
# find largest directories consuming space below this point
function fld {
  du -S . | sort -nr | head -10
}
#-----------------------------------------------------------#
# change directories to directory containing alert log file
function bdump {
   cd /u01/app/oracle/diag/rdbms/o12c/o12c/trace
  } # bdump
#-----------------------------------------------------------#

tbsp_chk.bsh

This script checks to see if any tablespaces are surpassing a certain fullness threshold. Store this script in a directory such as $HOME/bin. Make sure you modify the script to contain the correct username, password, and e-mail address for your environment.

You also need to establish the required OS variables, such as ORACLE_SID and ORACLE_HOME. You can either hard-code those variables into the script or call another script that sources the variables for you. This script calls a script (named oraset) that sets the OS variables (see Chapter 2 for the details of this script). You don’t have to use the oraset script; the idea is to have a consistent and repeatable way of establishing OS variables for your environment.

You can run this script from the command line. In this example I passed it the database name (o12c) and wanted to see what tablespaces had less than 20 percent space left:

$ tbsp_chk.bsh o12c 20

The output indicates that two tablespaces for this database have less than 20 percent space left:

space not okay
0 % free UNDOTBS1, 17 % free SYSAUX,

Here are the contents of the tbsp_chk.bsh script:

#!/bin/bash
#
if [ $# -ne 2 ]; then
  echo "Usage: $0 SID threshold"
  exit 1
fi
# either hard code OS variables or source them from a script.
# see Chapter 2 for details on using oraset to source oracle OS variables
. /var/opt/oracle/oraset $1
#
crit_var=$(
sqlplus -s <<EOF
system/foo
SET HEAD OFF TERM OFF FEED OFF VERIFY OFF
COL pct_free FORMAT 999
SELECT (f.bytes/a.bytes)*100 pct_free,'% free',a.tablespace_name||','
FROM
(SELECT NVL(SUM(bytes),0) bytes, x.tablespace_name
FROM dba_free_space y, dba_tablespaces x
WHERE x.tablespace_name = y.tablespace_name(+)
AND x.contents != 'TEMPORARY' AND x.status != 'READ ONLY'
AND x.tablespace_name  NOT LIKE 'UNDO%'
GROUP BY x.tablespace_name) f,
(SELECT SUM(bytes) bytes, tablespace_name
FROM dba_data_files
GROUP BY tablespace_name) a
WHERE a.tablespace_name = f.tablespace_name
AND  (f.bytes/a.bytes)*100 <= $2
ORDER BY 1;
EXIT;
EOF)
if [ "$crit_var" = "" ]; then
  echo "space okay"
else
  echo "space not okay"
  echo $crit_var
  echo $crit_var | mailx -s "tbsp getting full on $1" [email protected]
fi
exit 0

Usually, you run a script such as this automatically, on a periodic basis, from a scheduling utility, such as cron. Here is a typical cron entry that runs the script once an hour:

# Tablespace check
2 * * * * /orahome/bin/tbsp_chk.bsh INVPRD 10 1>/orahome/bin/log/tbsp_chk.log 2>&1

This cron entry runs the job and stores any informational output in the tbsp_chk.log file.

When running tbsp_chk.bsh in an Oracle Database 12c pluggable database environment from the root container, you’ll need to reference the CDB_* views rather than the DBA_* views for the script to properly report on space regarding all pluggable databases (within the container database). You should also consider adding the NAME and CON_ID to the query so that you can view which pluggable database is potentially having space issues; for example,

SET LINES 132 PAGES 100
COL name FORM A10
COL tablespace_name FORM A15
COL file_name FORM A40
--
SELECT
  c.name, t.tablespace_name,
  NVL(SUM(f.bytes),0) free_bytes, SUM(d.bytes) alloc_bytes, d.file_name
FROM v$containers    c
    ,cdb_tablespaces t
    ,cdb_free_space  f
    ,cdb_data_files  d
WHERE t.tablespace_name = f.tablespace_name(+)
AND c.con_id = t.con_id
AND c.con_id = f.con_id(+)
AND c.con_id = d.con_id
AND t.tablespace_name = d.tablespace_name
GROUP BY c.name, t.tablespace_name, d.file_name
ORDER BY 1,2,5;

conn.bsh

You need to be alerted if there are issues with connecting to databases. This script checks to see if a connection can be established to the database. If a connection can’t be established, an e-mail is sent. Place this script in a directory such as $HOME/bin. Make sure you modify the script to contain the correct username, password, and e-mail address for your environment.

You also need to establish the required OS variables, such as ORACLE_SID and ORACLE_HOME. You can either hard-code those variables into the script or call a script that sources the variables for you. Like the previous script, this script calls a script (named oraset) that sets the OS variables (see Chapter 2).

The script requires that the ORACLE_SID be passed to it; for example,

$ conn.bsh INVPRD

If the script can establish a connection to the database, the following message is displayed:

success
db ok

Here are the contents of the conn.bsh script:

#!/bin/bash
if [ $# -ne 1 ]; then
  echo "Usage: $0 SID"
  exit 1
fi
# either hard code OS variables or source them from a script.
# see Chapter 2 for details on oraset script to source OS variables
. /etc/oraset $1
#
echo "select 'success' from dual;" | sqlplus -s system/foo@o12c | grep success
if [[ $? -ne 0 ]]; then
  echo "problem with $1" | mailx -s "db problem" [email protected]
else
  echo "db ok"
fi
#
exit 0

This script is usually automated via a utility such as cron. Here is a typical cron entry:

# Check to connect to db.
18 * * * * /home/oracle/bin/conn.bsh o12c 1>/home/oracle/bin/log/conn.log 2>&1

This cron entry runs the script once per hour. Depending on your availability requirements, you may want to run a script such as this on a more frequent basis.
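For example, here is a hypothetical crontab entry that runs the check every 10 minutes instead of once per hour:

```shell
# Hypothetical: check the connection every 10 minutes
*/10 * * * * /home/oracle/bin/conn.bsh o12c 1>/home/oracle/bin/log/conn.log 2>&1
```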

filesp.bsh

Use the following script to check for a mount point that is filling up. Place the script in a directory such as $HOME/bin. You need to modify the script so that the mntlist variable contains a list of mount points that exist on your database server. Because this script isn’t running any Oracle utilities, there is no reason to set the Oracle-related OS variables (as with the previous shell scripts):

#!/bin/bash
mntlist="/orahome /ora01 /ora02 /ora03"
for ml in $mntlist
do
echo $ml
usedSpc=$(df -h $ml | awk '{print $5}' | grep -v capacity | cut -d "%" -f1 -)
BOX=$(uname -a | awk '{print $2}')
#
case $usedSpc in
[0-9])
arcStat="relax, lots of disk space: $usedSpc"
;;
[1-7][0-9])
arcStat="disk space okay: $usedSpc"
;;
[8][0-9])
arcStat="space getting low: $usedSpc"
echo $arcStat | mailx -s "space on: $BOX" [email protected]
;;
[9][0-9])
arcStat="warning, running out of space: $usedSpc"
echo $arcStat | mailx -s "space on: $BOX" [email protected]
;;
[1][0][0])
arcStat="update resume, no space left: $usedSpc"
echo $arcStat | mailx -s "space on: $BOX" [email protected]
;;
*)
arcStat="huh?: $usedSpc"
esac
#
echo $arcStat
#
done
#
exit 0

You can run this script manually from the command line, like this:

$ filesp.bsh

Here is the output for this database server:

/orahome
disk space okay: 79
/ora01
space getting low: 84
/ora02
disk space okay: 41
/ora03
relax, lots of disk space: 9

This is the type of script you should run on an automated basis from a scheduling utility such as cron. Here is a typical cron entry:

# Filesystem check
7 * * * * /orahome/bin/filesp.bsh 1>/orahome/bin/log/filesp.log 2>&1

Keep in mind that the shell script used in this section (filesp.bsh) may require modification for your environment. The shell script is dependent on the output of the df -h command, which varies by OS and version. For instance, on a Solaris box the output of df -h appears as follows:

$ df -h
Filesystem             size   used  avail capacity  Mounted on
/ora01                  50G    42G   8.2G    84%    /ora01
/ora02                  50G    20G    30G    41%    /ora02
/ora03                  50G   4.5G    46G     9%    /ora03
/orahome                30G    24G   6.5G    79%    /orahome

This line in the shell script selectively reports on the “capacity” in the output of the df -h command:

usedSpc=$(df -h $ml | awk '{print $5}' | grep -v capacity | cut -d "%" -f1 -)

For your environment you’ll have to modify the prior line to correctly extract the information related to disk space remaining per mount point. For example, say you’re on a Linux box and issue a df -h command, and you observe the following output:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      222G  162G   49G  77% /

There’s only one mount point, and the disk space percentage is associated with the “Use%” column. Therefore, to extract the pertinent information, you’ll need to modify the code associated with usedSpc within the shell script; for example,

df -h / | grep % | grep -v Use | awk '{print $4}' | cut -d "%" -f1 -

The shell script will thus need to have the following lines modified, as shown:

mntlist="/"
for ml in $mntlist
do
echo $ml
usedSpc=$(df -h / | grep % | grep -v Use | awk '{print $4}' | cut -d "%" -f1 -)

login.sql

Use this script to customize aspects of your SQL*Plus environment. When logging in to SQL*Plus in Linux/Unix, the login.sql script is automatically executed if it exists in a directory contained within the SQLPATH variable. If the SQLPATH variable hasn’t been defined, then SQL*Plus looks for login.sql in the current working directory from which SQL*Plus was invoked. For instance, here is how the SQLPATH variable is defined in my environment:

$ echo $SQLPATH
/home/oracle/scripts
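If SQLPATH isn’t defined on your server, you can set it in your startup file so that SQL*Plus finds login.sql no matter which directory you invoke it from. A minimal sketch, assuming you keep your SQL scripts in $HOME/scripts as this chapter does:

```shell
# Hypothetical addition to ~/.bashrc: point SQL*Plus at your script directory
export SQLPATH=$HOME/scripts
echo "$SQLPATH"
```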

I created the login.sql script in the /home/oracle/scripts directory. It contains the following lines:

-- set SQL prompt
SET SQLPROMPT '&_USER.@&_CONNECT_IDENTIFIER.> '

Now, when I log in to SQL*Plus, my prompt is automatically set:

$ sqlplus / as sysdba
SYS@o12c>

top.sql

The following script lists the top CPU-consuming SQL processes. It’s useful for identifying problem SQL statements. Place this script in a directory such as $HOME/scripts:

select * from(
select
 sql_text
,buffer_gets
,disk_reads
,sorts
,cpu_time/1000000 cpu_sec
,executions
,rows_processed
from v$sqlstats
order by cpu_time DESC)
where rownum < 11;

This is how you execute this script:

SQL> @top

Here is a snippet of the output, showing an SQL statement that is consuming a large amount of database resources:

INSERT INTO "REP_MV"."GEM_COMPANY_MV"
SELECT   CASE GROUPING_ID(trim(upper(nvl(ad.organization_name,u.company))))
WHEN 0 THEN
trim(upper(nvl(ad.organization_name,u.company)))
  
11004839   20937562        136   21823.59         17       12926019

lock.sql

This script displays sessions that have locks on tables that are preventing other sessions from completing work. The script shows details about the blocking and waiting sessions. You should place this script in a directory such as $HOME/scripts. Here are the contents of lock.sql:

SET LINES 83 PAGES 30
COL blkg_user    FORM a10
COL blkg_machine FORM a10
COL blkg_sid     FORM 99999999
COL wait_user    FORM a10
COL wait_machine FORM a10
COL wait_sid     FORM 9999999
COL obj_own      FORM a10
COL obj_name     FORM a10
--
SELECT
 s1.username    blkg_user
,s1.machine     blkg_machine
,s1.sid         blkg_sid
,s1.serial#     blkg_serialnum
,s1.sid || ',' || s1.serial# kill_string
,s2.username    wait_user
,s2.machine     wait_machine
,s2.sid         wait_sid
,s2.serial#     wait_serialnum
,lo.object_id   blkd_obj_id
,do.owner       obj_own
,do.object_name obj_name
FROM v$lock l1
    ,v$session s1
    ,v$lock l2
    ,v$session s2
    ,v$locked_object lo
    ,dba_objects do
WHERE s1.sid = l1.sid
AND   s2.sid = l2.sid
AND   l1.id1 = l2.id1
AND   s1.sid = lo.session_id
AND   lo.object_id = do.object_id
AND   l1.block = 1
AND   l2.request > 0;

The lock.sql script is useful for determining what session has a lock on an object and also for showing the blocked session. You can run this script from SQL*Plus, as follows:

SQL> @lock.sql

Here is a partial listing of the output (truncated so that it fits on one page):

BLKG_USER  BLKG_MACHI  BLKG_SID BLKG_SERIALNUM
---------- ---------- --------- --------------
KILL_STRING
------------------------------------------------------------------------------
WAIT_USER  WAIT_MACHI WAIT_SID WAIT_SERIALNUM BLKD_OBJ_ID OBJ_OWN    OBJ_NAME
---------- ---------- -------- -------------- ----------- ---------- ----------
MV_MAINT   speed             24             11
24,11
MV_MAINT   speed            87              7       19095 MV_MAINT   INV

When running lock.sql in an Oracle Database 12c pluggable database environment from the root container, you’ll need to change DBA_OBJECTS to CDB_OBJECTS for the script to properly report locks throughout the entire database. You should also consider adding the NAME and CON_ID columns to the query so that you can view the container in which the lock is occurring. Here’s a snippet of the modified query (you’ll need to replace the “. . .” with columns you want to report on):

SELECT
 u.name
,s1.username    blkg_user
. . .
,do.object_name obj_name
FROM v$lock l1
    ,v$session s1
    ,v$lock l2
    ,v$session s2
    ,v$locked_object lo
    ,cdb_objects do
    ,v$containers u
WHERE s1.sid = l1.sid
AND   s2.sid = l2.sid
AND   l1.id1 = l2.id1
AND   s1.sid = lo.session_id
AND   lo.object_id = do.object_id
AND   l1.block = 1
AND   l2.request > 0
AND   do.con_id = u.con_id;

users.sql

This script displays information about when users were created and whether their account is locked. The script is useful when you’re troubleshooting connectivity issues. Place the script in a directory such as $HOME/scripts. Here is a typical users.sql script for displaying user account information:

SELECT
  username
 ,account_status
 ,lock_date
 ,created
FROM dba_users
ORDER BY username;

You can execute this script from SQL*Plus, as follows:

SQL> @users.sql

Here is some sample output:

USERNAME        ACCOUNT_ST LOCK_DATE    CREATED
--------------- ---------- ------------ ------------
SYS             OPEN                    09-NOV-12
SYSBACKUP       OPEN                    09-NOV-12
SYSDG           OPEN                    09-NOV-12

When running users.sql in an Oracle Database 12c pluggable database environment from the root container, you’ll need to change DBA_USERS to CDB_USERS and add the NAME and CON_ID columns to report on all users in all pluggable databases; for example,

SELECT
  c.name
 ,u.username
 ,u.account_status
 ,u.lock_date
 ,u.created
FROM cdb_users    u
    ,v$containers c
WHERE u.con_id = c.con_id
ORDER BY c.name, u.username;

Organizing Scripts

When you have a set of scripts and utilities, you should organize them such that they’re consistently implemented for each database server. Follow these steps to implement the preceding DBA utilities for each database server in your environment:

  1. Create OS directories in which to store the scripts.
  2. Copy your scripts and utilities to the directories created in step 1.
  3. Configure your startup file to initialize the environment.

These steps are detailed in the following sections.

Step 1: Create Directories

Create a standard set of directories on each database server to store your custom scripts. A directory beneath the HOME directory of the oracle user is usually a good location. I generally create the following three directories:

  • $HOME/bin. Standard location for shell scripts that are run in an automated fashion (such as from cron).
  • $HOME/bin/log. Standard location for log files generated from the scheduled shell scripts.
  • $HOME/scripts. Standard location for storing SQL scripts.

You can use the mkdir command to create the previous directories, as follows:

$ mkdir -p $HOME/bin/log
$ mkdir $HOME/scripts
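The -p flag creates any missing intermediate directories, so a single command builds the nested bin/log path. Here’s a quick sketch that exercises the same layout in a scratch directory (using mktemp so it doesn’t touch your real home directory):

```shell
#!/bin/bash
# Build the standard DBA directory layout under a scratch base
base=$(mktemp -d)
mkdir -p "$base/bin/log"   # -p also creates the intermediate bin directory
mkdir "$base/scripts"
ls "$base"
```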

It doesn’t matter where you place the scripts or what you name the directories, as long as you have a standard location so that when you navigate from server to server, you always find the same files in the same locations. In other words, it doesn’t matter what the standard is, only that you have a standard.

Step 2: Copy Files to Directories

Place your utilities and scripts in the appropriate directories. Copy the following files to the $HOME/bin directory:

dba_setup
dba_fcns
tbsp_chk.bsh
conn.bsh
filesp.bsh

Place the following SQL scripts in the $HOME/scripts directory:

login.sql
top.sql
lock.sql
users.sql

Step 3: Configure the Startup File

Place the following code in the .bashrc file or the equivalent startup file for the shell you use (.profile for the Korn shell). Here is an example of how to configure the .bashrc file:

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
#
# source oracle OS variables
. /etc/oraset <default_database>
#
# User specific aliases and functions
. $HOME/bin/dba_setup
. $HOME/bin/dba_fcns

Now, each time you log in to an environment, you have full access to all the OS variables, aliases, and functions established in the dba_setup and dba_fcns files. If you don’t want to log off and back in, then run the file manually, using the dot (.) command. This command executes the lines contained within a file. The following example runs the .bashrc file:

$ . $HOME/.bashrc

The dot instructs the shell to source the script. Sourcing causes your current login shell to inherit any variables set with an export command in the executed script. If you don’t use the dot notation, the variables set within the script are visible only in the context of the subshell that is spawned when the script is executed.
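The difference is easy to demonstrate. In this sketch (using a hypothetical throwaway script in /tmp), the exported variable survives only when the script is sourced:

```shell
#!/bin/bash
# Create a throwaway script that exports a variable
cat > /tmp/setvar.sh <<'EOF'
export MYVAR=from_script
EOF

# Running it normally spawns a subshell; the export dies with that subshell
bash /tmp/setvar.sh
echo "after running:  ${MYVAR:-unset}"

# Sourcing executes it in the current shell, so the variable persists
. /tmp/setvar.sh
echo "after sourcing: $MYVAR"
```

The first echo prints "unset" and the second prints "from_script", which is exactly why the startup-file changes above take effect only after you source .bashrc (or log in again).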

image Note   In the Bash shell the source command is equivalent to the dot (.) command.

Summary

This chapter described how to configure an efficient environment. This is especially important for DBAs who manage multiple databases on multiple servers. Regular maintenance and troubleshooting activities require you to log in directly to the database server. To promote efficiency and sanity, you should develop a standard set of OS tools and SQL scripts that help you maintain multiple environments. You can use standard features of the OS to assist with navigating, repeating commands, showing system bottlenecks, quickly finding critical files, and so on.

The techniques for configuring a standard OS are especially useful when you’re working on multiple servers with multiple databases. When you have multiple terminal sessions running simultaneously, it’s easy to lose your bearings and forget which session is associated with a particular server and database. With just a small amount of setup, you can make certain that your OS prompt always shows information such as the host and database. Likewise, you can always set your SQL prompt to show the username and database connection. These techniques help ensure that you don’t accidentally run a command or script in the wrong environment.

After you have installed the Oracle binaries, created a database, and configured your environment, you are ready to perform additional database administration tasks, such as creating tablespaces for the applications. The topic of tablespace creation and maintenance is discussed in the next chapter.
