
In this article, we will see how to collect Docker logs into the EFK (Elasticsearch + Fluentd + Kibana) stack. The example uses Docker Compose to set up multiple containers.
But before that, let us understand what Elasticsearch, Fluentd, and Kibana are.
1. Elasticsearch:- Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
2. Kibana:- Kibana is an open-source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.
3. Fluentd:- Fluentd is a cross-platform open-source data collection project originally developed at Treasure Data. It is written primarily in the Ruby programming language.
How to set up the EFK stack step by step:-
STEP 1:- First of all, create a docker-compose.yaml file for the EFK stack. In this demo we are using the Opendistro Docker images for security, but you can use the official images instead.
```yaml
version: "3"

services:
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.3.0
    container_name: elasticsearch
    restart: always
    environment:
      - cluster.name=elasticsearch
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 262144
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - traefik-net
  kibana:
    image: yashlogic/amazon-opendistro-for-elasticsearch-kibana-logtrail:1.3.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - traefik-net
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    restart: always
    container_name: fluentd
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - traefik-net

volumes:
  elasticsearch:

networks:
  traefik-net:
```
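One caveat worth knowing before starting this compose file: on Linux hosts, Elasticsearch requires the kernel setting vm.max_map_count to be at least 262144, or the container may refuse to start. A minimal sketch of the fix, run as root on the Docker host:

```shell
# Raise vm.max_map_count so Elasticsearch can start (Linux host, run as root)
sysctl -w vm.max_map_count=262144

# To make the setting survive reboots, persist it in /etc/sysctl.conf
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```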
STEP 2:- Then create a folder called fluentd, and inside it create a Dockerfile. It looks like /fluentd/Dockerfile:
```dockerfile
# fluentd/Dockerfile
FROM fluent/fluentd:v1.6-debian-1
USER root
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "3.5.2"]
USER fluent
```
STEP 3:- After that, create a conf folder inside the fluentd directory and add a fluent.conf file there. It looks like /fluentd/conf/fluent.conf:
```
# fluentd/conf/fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch_dynamic
    hosts elasticsearch:9200
    user admin
    password admin
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 10s
    include_timestamp true
    index_name ${tag_parts[0]}
  </store>
  <store>
    @type stdout
  </store>
  <buffer tag>
    @type memory # or file
    flush_thread_count 4
  </buffer>
</match>
```
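One line in this config deserves a note: index_name ${tag_parts[0]} means the Elasticsearch index is named after the first dot-separated segment of the Fluentd tag. A quick shell sketch of that split (the tag value here is just an example):

```shell
# Fluentd splits the tag on dots; ${tag_parts[0]} is the first segment
tag="fluent.nginx.access"
index_name="${tag%%.*}"   # strip everything after the first dot
echo "$index_name"        # prints: fluent
```

So if your application tags its logs with fluent (as in step 4 below), the records land in an index named fluent.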
In this config you can remove user and password if you are not using the Opendistro images, and change hosts to point at your own Elasticsearch. Now run the compose file with this command:
```shell
docker-compose up -d
```
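Once the containers are up, it is worth a quick sanity check before moving on. These commands assume the defaults from the compose file above (the Opendistro image ships with admin/admin demo credentials, and port 9200 is mapped to the host):

```shell
# List the three containers and confirm they are all "Up"
docker-compose ps

# Elasticsearch should answer on 9200 over plain HTTP,
# since opendistro_security.ssl.http.enabled was set to false above
curl -u admin:admin http://localhost:9200
```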
STEP 4:- Finally, the EFK stack is ready. Now launch your application and send its logs to Elasticsearch. Here I am using nginx with the fluentd logging driver attached:
```yaml
version: "3"

services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    logging:
      driver: "fluentd"
      options:
        fluentd-address: 192.45.34.34:24224
        tag: fluent
```
In this config, use your own fluentd-address and choose a tag name; the tag is what you will use for the Kibana index pattern.
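If you prefer plain docker run over Compose for the application container, the same logging section maps onto the --log-driver and --log-opt flags (the address below is just the example value from the config above; substitute your own Fluentd host):

```shell
# Run nginx with its logs shipped to Fluentd instead of the default json-file driver
docker run -d --name nginx -p 80:80 \
  --log-driver=fluentd \
  --log-opt fluentd-address=192.45.34.34:24224 \
  --log-opt tag=fluent \
  nginx
```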
STEP 5:- Now confirm the logs from the Kibana dashboard: go to http://localhost:5601/ in your browser. Then you need to set up the index pattern for Kibana: specify fluent* as the index name or pattern and press the Create button.

Here you can see that your index pattern has been created, and you can now view your application logs in the Discover section.

Reference links:- https://docs.fluentd.org/container-deployment/docker-compose



