Building the Logstash data pipeline

Having set up the mechanism to automatically create the Elasticsearch index and the metadata database, we can now focus on building the data pipeline using Logstash. What should our data pipeline do? It should perform the following steps:

  • Accept JSON requests over HTTP.
  • Enrich the JSON with the metadata we have in the MySQL database.
  • Store the resulting documents in Elasticsearch.

These three functions correspond exactly to the input, filter, and output plugins of a Logstash data pipeline, respectively. The full Logstash configuration file for this data pipeline is in the code base at https://github.com/pranav-shukla/learningelasticstack/tree/v7.0/chapter-10/files/logstash_sensor_data_http.conf.
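The following is a minimal sketch of what such a configuration could look like, to show how the three plugin sections fit together. The port, the JDBC connection details, the database, table, and column names (sensor_metadata, sensors, sensor_id), the lookup target field (sensor_detail), and the index name are illustrative assumptions; the configuration file linked above is the authoritative version for this chapter.

input {
  http {
    # Listen for JSON documents posted over HTTP (port is an assumption)
    port => 8080
    codec => "json"
  }
}

filter {
  jdbc_streaming {
    # Enrich each event with sensor metadata looked up in MySQL;
    # connection details, table, and column names are assumptions
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM sensors WHERE sensor_id = :id"
    parameters => { "id" => "sensor_id" }
    # The query result is stored under this field on the event
    target => "sensor_detail"
  }
}

output {
  elasticsearch {
    # Index name with a daily date suffix is an assumption
    hosts => ["localhost:9200"]
    index => "sensor_data-%{+YYYY.MM.dd}"
  }
}

The jdbc_streaming filter runs the lookup query once per event, so the enrichment happens at ingest time and the resulting Elasticsearch documents are self-contained, with no join needed at query time.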

Let us look at how to achieve the end goal of our data pipeline by following these steps, starting with accepting JSON requests over HTTP.
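Once a pipeline with an http input is running, a sensor reading can be posted to it with a plain HTTP request. The port and the field names in the JSON body below are assumptions carried over from the sketch above, not the chapter's exact payload:

curl -XPOST -H "Content-Type: application/json" \
  -d '{"sensor_id": 1, "time": 1512102540000, "value": 16.24}' \
  http://localhost:8080/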
