Elasticsearch + Fluentd + Kibana Setup (EFK) with Docker

February 08, 2020

3 min read

In this article, we will see how to collect Docker container logs with the EFK (Elasticsearch + Fluentd + Kibana) stack. The example uses Docker Compose to set up the containers.
But before that, let us briefly look at what Elasticsearch, Fluentd, and Kibana are.

1. Elasticsearch:- Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

2. Kibana:- Kibana is an open-source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed in an Elasticsearch cluster. Users can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.

3. Fluentd:- Fluentd is a cross-platform, open-source data collection project originally developed at Treasure Data. It is written primarily in the Ruby programming language.

How to set up the EFK stack, step by step:-

STEP 1:- First of all, create a docker-compose.yaml file for the EFK stack. In this demo we are using the Open Distro for Elasticsearch Docker images for their security features, but you can use the official images instead.

version: "3"

services:
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.3.0
    container_name: elasticsearch
    restart: always
    environment:
      - cluster.name=elasticsearch
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 262144
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - traefik-net
  kibana:
    image: yashlogic/amazon-opendistro-for-elasticsearch-kibana-logtrail:1.3.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - traefik-net
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    restart: always
    container_name: fluentd
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - traefik-net

volumes:
  elasticsearch:

networks:
  traefik-net:

STEP 2:- Then create a folder named fluentd, and in that folder create a Dockerfile. It looks like /fluentd/Dockerfile

# fluentd/Dockerfile
FROM fluent/fluentd:v1.6-debian-1
USER root
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "3.5.2"]
USER fluent

STEP 3:- After that, create a conf folder with a fluent.conf file inside the fluentd directory. It looks like /fluentd/conf/fluent.conf

# fluentd/conf/fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch_dynamic
    hosts elasticsearch:9200
    user admin
    password admin
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 10s
    include_timestamp true
    index_name ${tag_parts[0]}
  </store>
  <store>
    @type stdout
  </store>
  <buffer tag>
    @type memory # or file
    flush_thread_count 4
  </buffer>
</match>
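The buffer section above keeps pending chunks in memory, so anything not yet flushed is lost if the container restarts. A sketch of the file-backed variant hinted at by the # or file comment (the /fluentd/buffer path is an assumption; any path writable by the fluent user inside the container works):

```
<buffer tag>
  @type file
  path /fluentd/buffer   # hypothetical path, must be writable by the fluent user
  flush_thread_count 4
</buffer>
```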

In this config you can remove user and password if you are not using the Open Distro images, and change hosts to match your setup. Note that Elasticsearch generally refuses to start unless the Docker host's vm.max_map_count kernel setting is at least 262144 (sysctl -w vm.max_map_count=262144). Now bring the stack up with Docker Compose:

docker-compose up -d
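Once the containers are up, the quickest sanity check is Elasticsearch's /_cluster/health endpoint. A minimal sketch using only Python's standard library (the URL is assumed from the 9200 port mapping in the compose file above):

```python
import json
from urllib.request import urlopen

def cluster_status(body: bytes) -> str:
    # /_cluster/health returns JSON with a "status" field
    # of "green", "yellow", or "red".
    return json.loads(body)["status"]

def check_health(url: str = "http://localhost:9200/_cluster/health") -> str:
    # Requires the stack from step 1 to be running.
    with urlopen(url) as resp:
        return cluster_status(resp.read())
```

A single-node cluster will typically report yellow rather than green, because replica shards cannot be allocated anywhere.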

STEP 4:- Finally, the EFK stack is ready; now launch your application and send its logs to Elasticsearch. Here I am using nginx with the fluentd logging driver and a logging tag attached:

version: "3"

services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    logging:
      driver: "fluentd"
      options:
        fluentd-address: 192.45.34.34:24224
        tag: fluent

In this config, use your own fluentd-address and choose a tag name; the tag determines the index name you will use for the Kibana index pattern.
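Under the hood, the fluentd logging driver ships each log line to port 24224 using Fluentd's forward protocol. A rough stdlib sketch of a single event, which has the shape ["tag", unix_timestamp, {record}]; in my understanding in_forward also accepts this JSON encoding alongside the usual msgpack, and the tag and record values here are purely illustrative:

```python
import json
import socket
import time

def forward_event(tag: str, record: dict) -> bytes:
    # One forward-protocol event: ["tag", unix_timestamp, {record}].
    return json.dumps([tag, int(time.time()), record]).encode()

def send_event(event: bytes, host: str = "localhost", port: int = 24224) -> None:
    # Requires the fluentd container from step 1 to be listening on 24224.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(event)
```

With the fluent.conf above, an event tagged fluent lands in the Elasticsearch index fluent, because index_name is set to ${tag_parts[0]}.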

STEP 5:- Now confirm the logs from the Kibana dashboard: go to http://localhost:5601/ in your browser. Then set up the index pattern for Kibana: specify fluent* as the index name or pattern and press the Create button.
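Why fluent*? The elasticsearch_dynamic store above sets index_name ${tag_parts[0]}, where tag_parts is the tag split on dots. A tiny illustrative sketch of that lookup (the function name is mine, not Fluentd's):

```python
def tag_parts(tag: str) -> list:
    # Mirrors Fluentd's ${tag_parts[N]} placeholder: the tag split on dots.
    return tag.split(".")

# The nginx compose file above tags its logs "fluent", so records land in the
# Elasticsearch index "fluent", which the Kibana pattern "fluent*" matches.
```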

Here you can see that your index pattern has been created, and you can now view your application logs in the Discover section.

Reference links:- https://docs.fluentd.org/container-deployment/docker-compose