Sending data using zabbix_sender

Until now, you have seen how to implement external checks on both the server side and the agent side, which involves moving the workload from the monitoring host to the monitored host. Neither method is the best approach for heavy and extensive monitoring, especially since we are thinking of placing Zabbix in a large environment. In that case, it is usually better to dedicate a server to all our checks and reserve those two functionalities for checks that are not run widely.

Zabbix provides a utility designed to send data to the server. This utility is zabbix_sender, and with it, you can send item data to your server using items of the Zabbix trapper type.

To test the zabbix_sender utility, simply add a Zabbix trapper item to an existing host and run the command:

zabbix_sender -z <zabbixserver> -s <yourhostname> -k <item_key> -o <value>
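For example, assuming a trapper item with the hypothetical key tnsping already exists on the host DB1, the call could look like this:

zabbix_sender -z 127.0.0.1 -s DB1 -k tnsping -o 0.42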

You will get a response similar to the following:

Info from server: "Processed 1 Failed 0 Total 1 Seconds spent 0.0433214"
sent: 1; skipped: 0; total: 1
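If you have many values to deliver at once, zabbix_sender can also read them from an input file with the -i option, one "<hostname> <item_key> <value>" triple per line. The following is a minimal sketch; the hostnames and keys are placeholders that must match trapper items configured on your server:

# /tmp/values.txt holds one "<hostname> <item_key> <value>" per line
cat > /tmp/values.txt <<EOF
DB1 tnsping 0.42
DB2 tnsping 0.55
EOF
# deliver the whole batch in a single run
zabbix_sender -z <zabbixserver> -i /tmp/values.txt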

As you just saw, the zabbix_sender utility is easy to use. With it, we can now dedicate a server to all our resource-intensive scripts.

The new script

Now, we can turn the script that was previously used as an external check and as a UserParameter into a new version that sends traps to your Zabbix server.

The core part of the software will be as follows:

  # look up the connection string for this host in the credentials file
  CONNECTION=$(grep "$HOST" "$CONNFILE" | cut -d';' -f2) || exit 3
  # run the query stored in the $QUERY.sql file
  RESULT=$(execquery "$CONNECTION" "$QUERY.sql")
  if [ -z "$RESULT" ]; then
         send "$HOST" "$KEY" "none"
         exit 0
  fi
  send "$HOST" "$KEY" "$RESULT"
  exit 0

This code executes the following steps:

  1. Retrieving the connection string from a file:
      CONNECTION=$(grep "$HOST" "$CONNFILE" | cut -d';' -f2) || exit 3
  2. Executing the query specified in the $QUERY.sql file:
      RESULT=$(execquery "$CONNECTION" "$QUERY.sql")
  3. Checking the result of the query: if it is not empty, sending the value to Zabbix; otherwise, replacing the value with "none":
      if [ -z "$RESULT" ]; then
             send "$HOST" "$KEY" "none"
             exit 0
      fi
      send "$HOST" "$KEY" "$RESULT"

In this code, there are two main functions in play: the execquery() function, which is essentially unchanged, and the send() function. The send() function plays a key role in delivering data to the Zabbix server:

send () {
   MYHOST="$1"
   MYKEY="$2"
   MYMSG="$3"
   # deliver a single value to the corresponding trapper item
   zabbix_sender -z "$ZBX_SERVER" -p "$ZBX_PORT" -s "$MYHOST" -k "$MYKEY" -o "$MYMSG"
}

This function sends the values passed to it using a command line just like the one already used to test the zabbix_sender utility. On the server side, the value sent must have a corresponding item of the trapper type, and Zabbix will receive and store your data.
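As a quick usage sketch (the host, key, and value here are hypothetical), remember that zabbix_sender exits with a non-zero code when the server does not process the value, so the result of send() is worth checking:

# hypothetical call: report the tnsping time for the DB1 database
send DB1 tnsping 0.42 > /dev/null || echo "delivery to Zabbix failed for DB1" >&2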

Now, to automate the whole check process, you need a wrapper that loops over all your configured Oracle instances, retrieves the data, and sends it to Zabbix. The wrapper acquires the database list and the corresponding login credentials from a configuration file, and it calls your check_ora_sendtrap.sh script repeatedly.

Writing a wrapper script for check_ora_sendtrap

Since this script will run from crontab, the first thing it does is set up the environment properly by sourcing a configuration file:

source /etc/zabbix/externalscripts/check_ora/globalcfg
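The send() function shown earlier relies on the $ZBX_SERVER and $ZBX_PORT variables, so a minimal globalcfg could define them as follows (the values here are hypothetical):

# /etc/zabbix/externalscripts/check_ora/globalcfg
ZBX_SERVER=zabbix.example.com   # hypothetical Zabbix server address
ZBX_PORT=10051                  # default Zabbix trapper port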

Then, it changes to the script directory. Please note that the directory structure has not been changed, for compatibility purposes:

cd /etc/zabbix/externalscripts

Then, it begins to execute all the queries against all the databases:

for host in $HOSTS; do
   for query in $QUERIES; do
      # run each query check in the background, pacing the starts
      ./check_ora_sendtrap.sh -r -i $host -q ${query%.sql} &
      sleep 5
   done
   # retrieve the tnsping time for this database
   ./check_ora_sendtrap.sh -r -i $host -t &
   sleep 5
   # retrieve the connection time for this database
   ./check_ora_sendtrap.sh -r -i $host -s &
done

Note that this script executes all the queries and also retrieves the tnsping time and the connection time for each database (the -t and -s options). Two shell variables are used to cycle between hosts and queries; they are populated by two functions:

HOSTS=$(gethosts)
QUERIES=$(getqueries)

The gethosts function retrieves the database names from the configuration file /etc/zabbix/externalscripts/check_ora/credentials:

gethosts () {
   cd /etc/zabbix/externalscripts/check_ora
   # the first semicolon-separated field of every non-comment line is the database name
   grep -v '^#' credentials | cut -d';' -f 1
}
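For reference, the credentials file is expected to hold one semicolon-separated line per database, with the name in the first field and the connection string in the second (the entries below are hypothetical):

# /etc/zabbix/externalscripts/check_ora/credentials
# <dbname>;<connection string>
DB1;scott/tiger@DB1
DB2;scott/tiger@DB2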

The getqueries function goes down into the directory tree, retrieving all the query files present:

getqueries () {
   cd /etc/zabbix/externalscripts/check_ora
   ls *.sql
}

Now, you only need to schedule the wrapper script by adding the following entry to your crontab:

*/5 * * * * /etc/zabbix/externalscripts/check_ora_cron.sh

Your Zabbix server will then store and graph the data.

Note

All the software discussed here is available on SourceForge at https://sourceforge.net/projects/checkora, released under GPLv3, and at http://www.smartmarmot.com/.

The pros and cons of the dedicated script server

With this approach, we have a dedicated server that retrieves data. This means you do not overload either the server that provides your service or the Zabbix server, which is a real advantage.

Unfortunately, this kind of approach lacks flexibility; in this specific case, all the items are refreshed every 5 minutes. With external checks or UserParameter, on the other hand, the refresh rate can vary and be customized per item.

In this particular case, where a database server is involved, there is an observer effect introduced by our script. The query can be as lightweight as you want, but to retrieve an item, sqlplus has to ask Oracle for a connection. This connection is used only for a few seconds (the time needed to retrieve the item), after which it is closed. This workflow completely lacks connection pooling. With connection pooling, you could perceptibly reduce the observer effect on your database.
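To get a feel for this overhead, you can time a trivial query from the shell; in a rough sketch like the following (the credentials are placeholders), most of the elapsed time is typically spent opening and tearing down the session rather than running the query itself:

# time a trivial query: the bulk of the elapsed time is connection setup/teardown
time echo "select 1 from dual;" | sqlplus -s scott/tiger@DB1 > /dev/null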

Note

Reducing the overhead with connection pooling is a general concept and is not tied to a specific database vendor. Databases, in general, will suffer if they are hammered with frequent requests to open and close connections.

Pooling connections is always a good thing to do in general. To better understand the benefit of this methodology, consider a complex network where the path crosses different firewalls and rules before arriving at the destination; there, the advantage of having a persistent connection is clear. Keeping a pool of persistent connections alive with keep-alive packets reduces the latency of retrieving an item from your database and, in general, the network workload. Creating a new connection involves the approval process of all the firewalls crossed. Also, bear in mind that, if you are using Oracle, a connection request is first made against the listener, which requires a callback once accepted, and so on. Unfortunately, connection pooling can't be implemented with shell components alone. There are different implementations of connection pooling, but before we go deep into the programming side, it is time to see how the Zabbix protocols work.
