compose-elk

The Elastic Stack powered by Docker and Compose.

What is the Elastic Stack?

By combining the massively popular Elasticsearch, Logstash, and Kibana, Elastic has created an end-to-end stack that delivers actionable insights in real time from almost any type of structured and unstructured data source. Built and supported by the engineers behind each of these open source products, the Elastic Stack makes searching and analyzing data easier than ever before.

Setup

Install Docker

  1. Docker Engine
  2. Docker Compose
  3. Clone this repository: git clone https://github.com/khezen/docker-elk

Elasticsearch requires vm.max_map_count to be at least 262144, so run the following command on your host:

sysctl -w vm.max_map_count=262144

You can set it permanently by adding the vm.max_map_count setting to /etc/sysctl.conf.
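
A minimal sketch of the permanent entry (the file location can differ between distributions):

# /etc/sysctl.conf
vm.max_map_count=262144

Reload it with sysctl -p so the change applies without a reboot.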

Usage

Start the Elastic Stack using docker-compose:

$ docker-compose up

You can also choose to run it in background:

$ docker-compose up -d

Now that the stack is running, you will want to inject logs into it. The shipped Logstash configuration lets you send content over TCP or UDP:

$ nc localhost 5000 < ./logstash-init.log

Then access Kibana at http://localhost:5601 in a web browser.

WARNING: If you're using boot2docker or Docker Toolbox, you must access it via the boot2docker IP address instead of localhost.

NOTE: You need to inject data into Logstash before you can create a Logstash index pattern in Kibana. Once data is indexed, all you have to do is hit the Create button.

By default, the Elastic Stack exposes the following ports:

  • 5000: Logstash TCP input
  • 9200: Elasticsearch HTTP
  • 9300: Elasticsearch TCP transport
  • 5601: Kibana
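
As a quick smoke test, you can push a single line through the Logstash TCP input and then check that an index appears in Elasticsearch. This is a sketch: it assumes the default elastic/changeme credentials from docker-compose.yml and plain HTTP on port 9200; depending on the Search Guard setup you may need https:// and curl -k.

# send one line to the Logstash TCP input
echo "hello elk" | nc localhost 5000

# list indices; a logstash-* index should appear after a few seconds
curl -u elastic:changeme http://localhost:9200/_cat/indices?v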

Docker Swarm

Deploy the Elastic Stack on your cluster using Docker Swarm:

  1. Connect to a manager node of the swarm
  2. git clone https://github.com/khezen/docker-elk
  3. cd docker-elk
  4. docker stack deploy -c swarm-stack.yml elk

The number of replicas for each service can be edited in swarm-stack.yml:

...
deploy:
    mode: replicated
    replicas: 2
...

Services are load balanced using HAProxy.
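
Replica counts can also be adjusted at runtime without redeploying the stack. A sketch, assuming the stack was deployed under the name elk so the Logstash service is named elk_logstash:

# scale the Logstash service to 3 replicas
docker service scale elk_logstash=3

# check the result
docker service ls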

Elasticsearch

The configuration file is located in /etc/elasticsearch/elasticsearch.yml.

You can find the default config there.

You can find help with Elasticsearch configuration there.

You can edit docker-compose.yml to set the khezen/elasticsearch environment variables yourself:

elasticsearch:
    image: khezen/elasticsearch
    environment:
        HEAP_SIZE: 1g
        ELASTIC_PWD: changeme
        KIBANA_PWD: changeme
        LOGSTASH_PWD: changeme
        BEATS_PWD: changeme
        ELASTALERT_PWD: changeme
    volumes:
        - /data/elasticsearch:/usr/share/elasticsearch/data
        - /etc/elasticsearch:/usr/share/elasticsearch/config
    ports:
        - "9200:9200"
        - "9300:9300"
    networks:
        - elk
    restart: unless-stopped
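
For example, on a host with 8 GB of RAM you might raise the heap to 4g; the usual guidance is to give the JVM heap no more than half of the available memory. A sketch of the relevant fragment only (the password is a placeholder):

elasticsearch:
    image: khezen/elasticsearch
    environment:
        HEAP_SIZE: 4g
        ELASTIC_PWD: a-strong-password

The rest of the service definition stays as shown above.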

Kibana

  • Discover - explore your data,

  • Visualize - create visualizations of your data,

    • You can find exported visualizations under the ./visualizations folder,
    • To import them into Kibana, go to the Management->Saved Objects panel,

  • Dashboard - display a collection of saved visualizations,

    • You can find exported dashboards under the ./dashboards folder,
    • To import them into Kibana, go to the Management->Saved Objects panel,

  • Timelion - combine totally independent data sources within a single visualization.

The configuration file is located in /etc/kibana/kibana.yml.

You can find the default config there.

You can find help with Kibana configuration there.

You can edit docker-compose.yml to set the khezen/kibana environment variables yourself:

kibana:
    links:
        - elasticsearch
    image: khezen/kibana
    environment:
        KIBANA_PWD: changeme
        ELASTICSEARCH_HOST: elasticsearch
        ELASTICSEARCH_PORT: 9200
    volumes:
        - /etc/kibana:/etc/kibana
        - /etc/elasticsearch/searchguard/ssl:/etc/searchguard/ssl
    ports:
        - "5601:5601"
    networks:
        - elk
    restart: unless-stopped
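
To check that Kibana is up and can reach Elasticsearch once the container is running, you can query its status API; a sketch, assuming the default port mapping above:

# returns the overall state of Kibana and its plugins as JSON
curl http://localhost:5601/api/status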

Logstash

The configuration file is located in /etc/logstash/logstash.conf.

You can find the default config there.

NOTE: It is possible to use environment variables in logstash.conf (see the sketch at the end of this section).

You can find help with Logstash configuration there.

You can edit docker-compose.yml to set the khezen/logstash environment variables yourself:

logstash:
    links:
        - elasticsearch
    image: khezen/logstash
    environment:
        HEAP_SIZE: 1g
        LOGSTASH_PWD: changeme
        ELASTICSEARCH_HOST: elasticsearch
        ELASTICSEARCH_PORT: 9200    
    volumes:
        - /etc/logstash:/etc/logstash/conf.d
        - /etc/elasticsearch/searchguard/ssl:/etc/elasticsearch/searchguard/ssl
    ports:
        - "5000:5000"
        - "5001:5001"
    networks:
        - elk
    restart: unless-stopped
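
As an illustration of the environment-variable note above, a minimal logstash.conf could reference the same variables set in docker-compose.yml. This is a sketch, not the configuration shipped with the image; the logstash user name is an assumption matching the LOGSTASH_PWD variable:

input {
    tcp { port => 5000 }
    udp { port => 5000 }
}

output {
    elasticsearch {
        hosts    => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        user     => "logstash"
        password => "${LOGSTASH_PWD}"
    }
}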

Beats

The Beats are open source data shippers that you install as agents on your servers to send different types of operational data to Elasticsearch.

any beat

You need to provide the Elasticsearch host:port and the credentials for the beats user in the configuration file:

output.elasticsearch:
  hosts: ["<ELASTICSEARCH_HOST>:<ELASTICSEARCH_PORT>"]
  index: "packetbeat"
  user: beats
  password: <BEATS_PWD>

metricbeat

You can find help with Metricbeat installation here.

The configuration file is located in /etc/metricbeat/metricbeat.yml.

You can find help with Metricbeat configuration here.

Start it with sudo /etc/init.d/metricbeat start.
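
For reference, a minimal metricbeat.yml module section might look like this (a sketch of the system module; pair it with the output.elasticsearch block shown in the any beat section):

metricbeat.modules:
    - module: system
      metricsets: ["cpu", "memory", "filesystem"]
      period: 10s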

filebeat

You can find help with Filebeat installation here.

The configuration file is located in /etc/filebeat/filebeat.yml.

You can find help with Filebeat configuration here.

Start it with sudo /etc/init.d/filebeat start.
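
For reference, a minimal filebeat.yml prospector section might look like this (a sketch in the 5.x format; newer releases rename filebeat.prospectors to filebeat.inputs):

filebeat.prospectors:
    - input_type: log
      paths:
          - /var/log/*.log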

packetbeat

You can find help with Packetbeat installation here.

The configuration file is located in /etc/packetbeat/packetbeat.yml.

You can find help with Packetbeat configuration here.

Start it with sudo /etc/init.d/packetbeat start.
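
For reference, a minimal packetbeat.yml sniffing section might look like this (a sketch in the 5.x format; adjust the interface and ports to what you want to capture):

packetbeat.interfaces.device: any
packetbeat.protocols.http:
    ports: [80, 8080, 9200, 5601]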

Elastalert

What is Elastalert?

ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest in Elasticsearch data. It is a nice replacement for the Watcher module if you are not willing to pay for the X-Pack subscription but still need some alerting features.

Configuration

The configuration file is located in /etc/elastalert/elastalert.yml.

You can find help with ElastAlert configuration here.

You can share rules from the host with the container by adding them to /usr/share/elastalert/rules.
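
A rule is a small YAML file dropped into that folder. A sketch of a frequency rule that fires when more than 50 error events arrive within 5 minutes (the index pattern, field name, and email address are placeholders):

# /usr/share/elastalert/rules/error-spike.yml
name: error-spike
type: frequency
index: logstash-*
num_events: 50
timeframe:
    minutes: 5
filter:
    - term:
          level: "error"
alert:
    - email
email:
    - "ops@example.com"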

User Feedback

Issues

If you have any problems with or questions about this project, please ask for help through a GitHub issue.
