Setting up Graylog

A Graylog server is a Java application that uses MongoDB as its database and stores all the logs it receives in Elasticsearch. Needless to say, a Graylog stack has quite a lot of moving parts that are hard to set up and administer, and you will need some dedicated people if you deploy it yourself.

A typical production setup uses a dedicated Elasticsearch cluster and several Graylog nodes, each with its own MongoDB instance. You can have a look at the Graylog architecture documentation (http://docs.graylog.org/en/latest/pages/architecture.html) for more details.

An excellent way to try out Graylog is to use its Docker (https://docs.docker.com) image, as described at http://docs.graylog.org/en/latest/pages/installation/docker.html.

Chapter 10, Containerized Services, explains how to use Docker for deploying microservices, and gives the basic knowledge you need to build and run Docker images.

Like Sentry, Graylog is backed by a commercial company, which offers hosting solutions. Depending on your project's nature and size, a hosted plan can be a good way to avoid maintaining this infrastructure yourself. For instance, if you run a commercial project that has a Service-Level Agreement (SLA), operating an Elasticsearch cluster smoothly is not a small task and will require some attention.

But for projects that don't generate a lot of logs, or where having log management down for a while is not the end of the world, running your own Graylog stack can be a good solution.

For this chapter, we'll just use the Docker image with docker-compose (a tool that runs and links several Docker images from a single call) and a minimal setup to demonstrate how our microservices can interact with Graylog.

To run a Graylog service locally, you need to have Docker installed (see Chapter 10, Containerized Services) and use the following Docker Compose configuration (taken from the Graylog documentation):

version: '2' 
services: 
  some-mongo: 
    image: "mongo:3" 
  some-elasticsearch: 
    image: "elasticsearch:2" 
    command: "elasticsearch -Des.cluster.name='graylog'" 
  graylog: 
    image: graylog2/server:2.1.1-1 
    environment: 
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper 
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api 
    links: 
      - some-mongo:mongo 
      - some-elasticsearch:elasticsearch 
    ports: 
      - "9000:9000" 
      - "12201:12201/udp" 

If you save that configuration in a docker-compose.yml file and run docker-compose up in the directory containing it, Docker will pull the MongoDB, Elasticsearch, and Graylog images and run them.

Once it's running, you can reach the Graylog dashboard at http://localhost:9000 in your browser, and log in with admin as both the username and password.

The next step is to go to System | Inputs to add a new UDP input so Graylog can receive our microservices' logs.

This is done by launching a new GELF UDP input on port 12201, as shown in the following screenshot:

Once the new input is in place, Graylog will bind the UDP port 12201 and will be ready to receive data. The docker-compose.yml file has that port exposed for the Graylog image, so your Flask applications can send data via localhost.
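To give an idea of what the microservice side looks like, here is a minimal sketch of a logging handler that ships records to a GELF UDP input using only the standard library. The field names follow the GELF payload specification (version, host, short_message, timestamp, level, plus custom fields prefixed with an underscore); the host, port, and class name are illustrative, and in a real project you would more likely reach for a dedicated GELF library rather than hand-rolling the handler:

```python
import json
import logging
import socket
import time
import zlib


class GELFUDPHandler(logging.Handler):
    """Send log records to a GELF UDP input as zlib-compressed JSON."""

    def __init__(self, host="localhost", port=12201):
        super().__init__()
        self.host = host
        self.port = port
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def emit(self, record):
        # GELF uses syslog severity levels, so map Python's levels to them.
        syslog_level = {"DEBUG": 7, "INFO": 6, "WARNING": 4,
                        "ERROR": 3, "CRITICAL": 2}.get(record.levelname, 6)
        message = {
            "version": "1.1",
            "host": socket.gethostname(),
            "short_message": record.getMessage(),
            "timestamp": time.time(),
            "level": syslog_level,
            "_logger": record.name,  # custom fields start with "_"
        }
        payload = zlib.compress(json.dumps(message).encode("utf-8"))
        self.sock.sendto(payload, (self.host, self.port))
```

Attaching this handler to a logger (logger.addHandler(GELFUDPHandler())) is enough to make every log call show up in Graylog, since UDP needs no connection setup and a lost datagram simply means a lost log line.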

If you click on Show Received Messages for the new input, you will get a search result displaying all the collected logs:

Congratulations! You are now ready to receive logs in a centralized place and watch them live in the Graylog dashboards.
