Categories: docker, logs, software architecture

How can I centralize and analyze docker container logs?

Reading Time: 4 minutes

Introduction

Log centralization is key as more and more firms adopt a microservice architecture to replace their monoliths. Parallel services are no longer necessarily hosted on the same node, which means that aggregating logs by hand would be long and tiresome. With Docker technologies expanding to support orchestration platforms such as Kubernetes and Docker Swarm, it becomes apparent that we need one place to centralize all of our logs.

Docker is the technology of choice, why?

Because it makes it possible to run more applications on the same servers and makes it very easy to package and ship programs.

Addressing the problem

Logs are vital, especially for developers. Why? Because fixing a problem without them can be close to impossible, costing the company time and money. Developers also need to trace requests from start to end, especially within a microservice architecture, in order to pinpoint the problem they are debugging.

Out of the box, Docker ships with a set of logging drivers. One of these, "syslog", writes logging messages to the syslog facility by routing the logs to a syslog server; the syslog daemon, however, must be running on the host machine. Syslog is one logging solution, but it comes with limitations, such as supporting only a limited set of metadata.
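As a quick sketch of that built-in option (the server address and container below are just placeholders), a single container can be pointed at a syslog server like this:

docker run --log-driver syslog \
    --log-opt syslog-address=udp://192.168.1.10:514 \
    alpine echo "hello syslog"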

A better solution would be to use a metadata-rich centralized logging platform alongside Docker that is fast, allows traceability and has community support.

This is where the ELK stack comes in.

Solution: The ELK Stack

ELK is the acronym for three open source projects: Elasticsearch, Logstash and Kibana. Elasticsearch is an analytics and search engine. Logstash is a server-side data processing pipeline that can obtain data from multiple sources, transform it, and then store it in a system such as Elasticsearch. Kibana lets users visualise the data with charts and graphs.

An overview of how this can work on our stack

Installing the stack requires resources and a properly formatted configuration. Let's have a look at a diagram.

This is the flow: the Docker daemon can connect directly to Logstash by configuring the logging driver in /etc/docker/daemon.json (for Linux hosts). Hint – inspecting the Logstash container will give you Logstash's internal IP.
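For example, once the Logstash container is up, something like the following should reveal its IP on the Docker network (the container name logstash is an assumption here):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' logstash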

{"log-driver": "gelf","log-opts": {"gelf-address": "udp://{LOGSTASH_IP}:12201"}

Once this has been done and Docker has been restarted, all newly created containers will pump their logs straight through to Logstash in the GELF format. The GELF logging format can be described as:

The gelf logging driver is a convenient format that is understood by a number of tools such as Graylog, Logstash, and Fluentd

Docker Logging Documentation

In GELF, every log message contains the following fields (an example message is shown after the list):
1. version
2. host (who sent the message in the first place)
3. timestamp
4. short and full versions of the message
5. any custom fields you configure yourself
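As an illustration (all values below are made up), a single message sent by the Docker gelf driver looks roughly like this, with Docker adding container metadata as underscore-prefixed custom fields:

{
  "version": "1.1",
  "host": "swarm-node-01",
  "timestamp": 1530000000.0,
  "short_message": "Order created",
  "full_message": "Order created\norderId=42 userId=7",
  "level": 6,
  "_container_name": "orders-service",
  "_image_name": "myorg/orders:1.0"
}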

Adding the ELK Stack

We can use a docker-compose file to spin up Elasticsearch, Logstash and Kibana. A few practices to think about:

  • The ELK stack should all be on the same network
  • A volume should be created for the Elasticsearch data
  • Raise the resource limits Elasticsearch needs (locked memory, open files, processes) using ulimits
  • Specify a placement constraint so that the ELK stack does not sit on the same node as your running applications
  • Kibana – specify the ELASTICSEARCH_URL environment variable so it can reach Elasticsearch
  • Kibana – include a healthcheck against the root URL
  • Logstash – specify gelf as the input and Elasticsearch as the output
  • An area for consideration – periodically clearing out your old Elasticsearch indices. This could be done later, depending on the disk space available on your server (a simple example is sketched after this list)
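As a rough sketch of the clean-up idea, assuming Logstash's default daily index names (logstash-YYYY.MM.dd) and that Elasticsearch is reachable on port 9200, an old month of indices could be dropped with:

curl -XDELETE "http://elasticsearch:9200/logstash-2018.01.*"

A tool such as Elasticsearch Curator can schedule this kind of retention policy instead of doing it by hand.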

Below is an example of a docker-compose file. This can be deployed locally for testing, deployed directly onto Docker Swarm, or transpiled into Kubernetes resources using Kompose.

(Please take note that this compose file includes placement constraints – on Swarm, the target nodes need the node label elkStack set to true.)

version: "3.4"

services:
  elasticsearch:
    command: elasticsearch -Enetwork.host=0.0.0.0 -Ediscovery.zen.ping.unicast.hosts=elasticsearch
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g
    image: elasticsearch:5
    ulimits:
      memlock: -1
      nofile:
        hard: 65536
        soft: 65536
      nproc: 65538
    volumes:
      - /usr/share/elasticsearch/data
    networks:
      - system-cicd
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.elkStack == true
      endpoint_mode: dnsrr

  kibana:
    image: kibana:5 # keep Kibana on the same major version as Elasticsearch
    ports:
      - "5601:5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - system-cicd
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.elkStack == true
    healthcheck:
      test: wget -qO- http://localhost:5601 > /dev/null
      interval: 30s
      retries: 3

  logstash:
    hostname: logstash
    command: logstash -e 'input{ gelf{ } } output{ elasticsearch{ hosts => ["elasticsearch"] } stdout{} }'
    image: docker.elastic.co/logstash/logstash:6.3.0
    ports:
        - "12201:12201/udp"
    networks:
      - system-cicd
    depends_on:
      - elasticsearch
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.elkStack == true

networks:
  system-cicd:
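For reference, assuming the file is saved as docker-compose.yml (the stack name elk is just an example), the three deployment routes mentioned above look roughly like this:

docker-compose up -d                           # local testing
docker stack deploy -c docker-compose.yml elk  # Docker Swarm
kompose convert -f docker-compose.yml          # generate Kubernetes manifests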

Once this docker-compose file has been deployed (or simply launched locally), you should be able to access the Kibana interface via the forwarded port, whether locally or on the Swarm, or through the Kubernetes Service that fronts the pod.

Logs from any running container should now be pumped through Logstash via the gelf driver into Elasticsearch, where they can be visualised in Kibana.
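To sanity-check that log events are actually arriving, you can list the indices Elasticsearch has created (this assumes you run it from a container on the system-cicd network, or have published port 9200):

curl "http://elasticsearch:9200/_cat/indices?v"

From there, a logstash-* index pattern in Kibana is enough to start exploring the logs.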

(Screenshot: the Kibana interface)

Conclusion

It can be seen that an ELK stack can be fired up quickly; however, it should be carefully tested within a development environment for at least a month before being moved onto any production system.
