Let's start with the simpler case: monitoring a single file. We'll need a couple of test files. To keep things organized, let's create a directory, /tmp/zabbix_logmon/, on A test host and create two files in there, logfile1 and logfile2. For both files, use the same content as this:
2018-08-13 13:01:03 a log entry
2018-08-13 13:02:04 second log entry
2018-08-13 13:03:05 third log entry
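The directory and files above can be created in one go; a minimal sketch, run on A test host (the paths match those used in the item keys later):

```shell
# Create the test directory and two identical log files
mkdir -p /tmp/zabbix_logmon
cat > /tmp/zabbix_logmon/logfile1 <<'EOF'
2018-08-13 13:01:03 a log entry
2018-08-13 13:02:04 second log entry
2018-08-13 13:03:05 third log entry
EOF
# The second file gets the same content
cp /tmp/zabbix_logmon/logfile1 /tmp/zabbix_logmon/logfile2
```

Make sure the files are readable by the user the Zabbix agent runs as, or the items will turn unsupported.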
With the files in place, let's proceed to creating items:
- Navigate to Configuration | Hosts, click on Items next to A test host, then click on Create item. Fill in the following:
- Name: First logfile
- Type: Zabbix agent (active)
- Key: log[/tmp/zabbix_logmon/logfile1]
- Type of information: Log
- Update interval: 1s
- When done, click on the Add button at the bottom.
As mentioned earlier, log monitoring only works as an active item, so we used that item type. For the key, the first parameter is required; it's the full path to the file we want to monitor. We also used a special type of information here, log. But why such a small update interval of one second? For log items, this interval isn't about making an actual connection between the agent and the server; it's only about the agent checking whether the file has changed: it does a stat() call, similar to what tail -f does on some platforms/filesystems. A connection to the server is only made when the agent has something to send.
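The stat()-based check can be pictured as a simple polling loop. This is a conceptual sketch only, not the agent's actual code; it seeds the logfile1 path used above, runs three one-second iterations instead of looping forever, and uses GNU stat:

```shell
# Conceptual sketch: poll the file size via stat() each interval and
# read only the bytes appended since the last check.
LOGFILE=/tmp/zabbix_logmon/logfile1
mkdir -p /tmp/zabbix_logmon
echo "2018-08-13 13:01:03 a log entry" > "$LOGFILE"

last_size=0
for _ in 1 2 3; do                      # the real agent loops until stopped
    size=$(stat -c %s "$LOGFILE")       # GNU stat: file size in bytes
    if [ "$size" -gt "$last_size" ]; then
        # The file grew: emit only the newly appended data
        tail -c +"$((last_size + 1))" "$LOGFILE"
        last_size=$size
    fi
    echo "even more entries" >> "$LOGFILE"   # simulate a writer
    sleep 1                                  # the 1s update interval
done
```

Nothing is read (and nothing is sent to the server) on intervals where the size has not changed; that is what makes a one-second interval cheap.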
With the item in place, it shouldn't take longer than three minutes for the data to arrive, if everything works as expected, of course. Up to one minute could be required for the server to update the configuration cache, and up to two minutes could be required for the active agent to update its list of items. Let's verify this: navigate to Monitoring | Latest data and filter by host, A test host. Our First logfile item should be there, and it should have some value as well:
As with other non-numeric items, Zabbix knows that it can't graph logs, hence there's a History link on the right-hand side; let's click on it:
All of the lines from our log file are here. By default, Zabbix log monitoring parses whole files from the very beginning. That's good in this case, but what if we wanted to start monitoring some huge existing log file? Not only would that parsing be wasteful, but we would also likely send lots of useless old information to the Zabbix server. Luckily, there's a way to tell Zabbix to only parse data that appears after the monitoring of that log file started. We could try that out with our second file and, to keep things simple, we could also clone our first item. Let's proceed with the following steps:
- Navigate to Configuration | Hosts, click on Items next to A test host, then click on First logfile in the Name column. At the bottom of the item configuration form, click on Clone and make the following changes:
- Name: Second logfile
- Key: log[/tmp/zabbix_logmon/logfile2,,,,skip]
- When done, click on the Add button at the bottom.
As before, it might take up to three minutes for this item to start working. Even once it does, there will be nothing to see on the latest data page; we specified the skip parameter, hence only new lines will be considered.
To test this, we could add some lines to Second logfile. On A test host, execute the following:
$ echo "2018-12-1 10:34:05 fourth log entry" >> /tmp/zabbix_logmon/logfile2
A moment later, this entry should appear in the latest data page:
If we check the item history, it's the only entry, as Zabbix only cares about new lines now.
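To confirm that behavior, we could append a couple more lines; only entries added after monitoring started should ever show up in the item history. A small sketch, using the same file as above (the mkdir is only there so the commands work even on a fresh host):

```shell
# Append two more entries; with skip mode, only lines written after
# monitoring began are collected by the agent.
mkdir -p /tmp/zabbix_logmon
echo "2018-12-1 10:35:06 fifth log entry" >> /tmp/zabbix_logmon/logfile2
echo "2018-12-1 10:36:07 sixth log entry" >> /tmp/zabbix_logmon/logfile2
# Show what was just appended
tail -n 2 /tmp/zabbix_logmon/logfile2
```

After a moment, both new entries should appear in the latest data page, while the original three lines remain absent from the history.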