Saturday 10 June 2017

Configure ELK stack for centralized log management

I had two CentOS 7 nodes: Elasticsearch, Logstash, and Kibana (ELK) were configured on the first node, and Filebeat was configured on the second node to send all logs to Logstash.

Hostnames : node1 & node2
Environment : CentOS 7.3
RPM versions : elasticsearch/kibana/logstash/filebeat - 5.4.1

Minimum requirements: make sure the Java package is installed and that the Elasticsearch server has sufficient memory for the number of clients you are trying to configure.

Elasticsearch, Logstash, and Kibana will all be installed on a single node (node1), and the client (node2) will forward all the logs under /var/log/* to Logstash on node1.

Let's start...

node1:

Download the RPMs and install them using yum; Kibana is a tarball, which we shall extract to a directory.


Install Elasticsearch:
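
The actual install step is a one-liner; a minimal sketch, assuming the 5.4.1 RPM from the elastic.co downloads page is sitting in the current directory (adjust the filename to whatever you fetched):

[root@node1 ~]# yum localinstall elasticsearch-5.4.1.rpm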

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 ~]# systemctl start elasticsearch.service
[root@node1 ~]# systemctl status elasticsearch.service
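
Once the service reports active, you can verify Elasticsearch is answering on its default REST port, 9200; it returns a small JSON document with the node name and version:

[root@node1 ~]# curl http://localhost:9200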

Install Kibana:

[root@node1 ~]# tar -xzvf kibana-5.4.1-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# cd /usr/local/
[root@node1 local]# mv kibana-5.4.1-linux-x86_64 kibana

Create a systemd unit file so the service starts when the system boots:

[root@node1 ~]# vim /etc/systemd/system/kibana.service
[Unit]
Description=Kibana

[Service]
ExecStart=/usr/local/kibana/bin/kibana

[Install]
WantedBy=multi-user.target

[root@node1 system]# systemctl daemon-reload
[root@node1 system]# systemctl enable kibana.service
[root@node1 system]# systemctl start kibana.service
[root@node1 system]# systemctl status kibana.service

Now you can point your browser at http://<your-ip>:5601 to get to the Kibana dashboard.
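
Note: Kibana 5.x binds to localhost by default, so the dashboard may only be reachable from node1 itself. To reach it from another machine, set server.host in the Kibana config (the path below assumes the tarball layout used above) and restart kibana.service:

[root@node1 ~]# vim /usr/local/kibana/config/kibana.yml
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"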

Install Logstash:
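
As with Elasticsearch, install from the downloaded RPM (the filename below is an assumption, adjust to your download), then create a pipeline configuration under /etc/logstash/conf.d/ that listens for Beats on port 5044 and forwards events to Elasticsearch:

[root@node1 ~]# yum localinstall logstash-5.4.1.rpm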

[root@node1 ~]# cat /etc/logstash/conf.d/logstash.conf
input {
   beats {
    port => 5044
    type => "logs"
  }
}

filter {
  if [type] == "syslog" {
   grok {
     match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"}
     add_field => [ "received_at", "%{@timestamp}" ]
     add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
     match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
   }
}

output {
  elasticsearch { hosts => "localhost" }
  stdout { codec => rubydebug }
}
[root@node1 ~]#
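
Before starting the service, it is worth asking Logstash to validate the pipeline syntax; with the RPM layout the binary lives under /usr/share/logstash:

[root@node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit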

[root@node1 ~]# systemctl enable logstash
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@node1 ~]# systemctl start logstash.service
[root@node1 ~]# systemctl status logstash.service
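
If node2 later fails to connect, first confirm the Beats listener is up on port 5044; on CentOS 7 you may also need to open the port in firewalld (the commands below assume firewalld is running):

[root@node1 ~]# ss -tlnp | grep 5044
[root@node1 ~]# firewall-cmd --permanent --add-port=5044/tcp
[root@node1 ~]# firewall-cmd --reload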

node2:

Install Filebeat:

Download and install the Filebeat RPM:


[root@node2 ~]# yum localinstall filebeat-5.4.1-x86_64.rpm
[root@node2 ~]# systemctl enable filebeat.service
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
[root@node2 ~]# systemctl start filebeat.service
[root@node2 ~]# systemctl status filebeat.service

Edit the Filebeat configuration so that it sends logs to your Logstash server.

[root@node2 ~]# vim /etc/filebeat/filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.122.100:5044"]
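
Two more things in the same file are worth checking. Filebeat 5.x allows only one output at a time, so the default output.elasticsearch block must stay commented out. Also, the Logstash filter shown earlier only fires for events whose type is "syslog", while Filebeat tags events as type "log" by default; setting document_type on the prospector changes that. A sketch of the relevant prospector section:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
  document_type: syslog

Restart Filebeat after editing the configuration:

[root@node2 ~]# systemctl restart filebeat.service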

Errors or info: logs you need to check if something goes wrong

node1:

elasticsearch:

[root@node1 ~]# cat /var/log/elasticsearch/elasticsearch.log

[2017-06-09T18:11:34,760][INFO ][o.e.n.Node               ] [] initializing ...
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.4gb], net total_space [6.1gb], spins? [unknown], types [rootfs]
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-09T18:11:34,976][INFO ][o.e.n.Node               ] node name [sfAWP7D] derived from node ID [sfAWP7DYQpiQTVtuRd0DFw]; set [node.name] to override
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] version[5.4.1], pid[2966], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-06-09T18:11:37,704][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [aggs-matrix-stats]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [ingest-common]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-expression]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-groovy]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-mustache]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-painless]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [percolator]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [reindex]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty3]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty4]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] no plugins loaded
[2017-06-09T18:11:42,393][INFO ][o.e.d.DiscoveryModule    ] [sfAWP7D] using discovery type [zen]
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] initialized
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] [sfAWP7D] starting ...
[2017-06-09T18:11:44,190][INFO ][o.e.t.TransportService   ] [sfAWP7D] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-06-09T18:11:47,411][INFO ][o.e.c.s.ClusterService   ] [sfAWP7D] new_master {sfAWP7D}{sfAWP7DYQpiQTVtuRd0DFw}{CNOzn0gIRBqP_pVHl-xusQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)

logstash:


[root@node1 ~]# cat /var/log/logstash/logstash-plain.log
[2017-06-09T20:44:22,105][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-06-09T20:44:22,135][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-06-09T20:44:22,395][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x554cf8d1 URL:http://localhost:9200/>}
[2017-06-09T20:44:22,422][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-09T20:44:22,567][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-09T20:44:22,599][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x74f64db6 URL://localhost>]}
[2017-06-09T20:44:22,813][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-06-09T20:44:23,857][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-06-09T20:44:23,978][INFO ][logstash.pipeline        ] Pipeline main started
[2017-06-09T20:44:24,112][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

node2:

filebeat:

[root@node2 ~]# head -100 /var/log/filebeat/filebeat
2017-06-09T20:37:53+05:30 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-06-09T20:37:53+05:30 INFO Setup Beat: filebeat; Version: 5.4.1
2017-06-09T20:37:53+05:30 INFO Max Retries set to: 3
2017-06-09T20:37:53+05:30 INFO Activated logstash as output plugin.
2017-06-09T20:37:53+05:30 INFO Publisher name: cen02.elktest.com
2017-06-09T20:37:53+05:30 INFO Flush Interval set to: 1s
2017-06-09T20:37:53+05:30 INFO Max Bulk Size set to: 2048
2017-06-09T20:37:53+05:30 INFO filebeat start running.
2017-06-09T20:37:53+05:30 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-06-09T20:37:53+05:30 INFO Metrics logging every 30s
2017-06-09T20:37:53+05:30 INFO Loading registrar data from /var/lib/filebeat/registry
2017-06-09T20:37:53+05:30 INFO States Loaded from registrar: 0
2017-06-09T20:37:53+05:30 INFO Loading Prospectors: 1
2017-06-09T20:37:53+05:30 INFO Prospector with previous states loaded: 0
2017-06-09T20:37:53+05:30 INFO Starting prospector of type: log; id: 17005676086519951868
2017-06-09T20:37:53+05:30 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-06-09T20:37:53+05:30 INFO Starting Registrar
2017-06-09T20:37:53+05:30 INFO Start sending events to output
2017-06-09T20:37:53+05:30 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/wpa_supplicant.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/yum.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions-uninstall.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/Xorg.0.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/boot.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/vboxadd-install.log
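
To confirm events are actually landing in Elasticsearch, list the indices back on node1; with the output configuration above, new indices appear under the default logstash-YYYY.MM.DD naming, and you can then create a logstash-* index pattern in Kibana to browse the events:

[root@node1 ~]# curl 'http://localhost:9200/_cat/indices?v'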



Thanks