Saturday, 9 September 2017

Configure Mail Server & Setting up Mail Client: Fedora Linux 25

Here, I will show you how to configure a mail server (MTA) and a mail client (MUA) using Evolution, which is the default mail client on Fedora.

Environment: Fedora 25

We will go over a few components before we set up the email configuration.

MUA (Mail User Agent) or Mail Client: the application used to write, send, and read email messages, e.g. Evolution, KMail, Outlook, or text-based clients such as pine and mail.

MTA (Mail Transfer Agent): transfers email messages from one computer to another (intranet or Internet). We will configure Postfix in this section.

MDA (Mail Delivery Agent): receives emails from the MTA and delivers them to the relevant mailbox, from which the MUA reads them (e.g. Dovecot). Some popular MDAs can also filter out unwanted or spam messages before they reach the MUA's inbox (e.g. Procmail).

SMTP (Simple Mail Transfer Protocol): the protocol MTAs use to talk to each other and transfer messages back and forth.
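To make this concrete, here is a minimal sketch of an SMTP conversation you can drive by hand with telnet once Postfix (configured below) is running. The server replies shown are typical Postfix responses and may differ slightly on your system; the addresses use this post's example domain.

#telnet localhost 25
220 fedora.localhost.com ESMTP
HELO localhost.com
250 fedora.localhost.com
MAIL FROM:<user@localhost.com>
250 2.1.0 Ok
RCPT TO:<user@localhost.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: smtp test

hello
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye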

Architecture


Configuring MTA

Log in as 'root' to perform the steps below.
Note: SELinux was disabled.

- Install postfix:
#dnf install postfix -y

- Take a backup of the original file, then paste in the contents below, changing the hostname, domain, and network values to match your infra setup:

#mv /etc/postfix/main.cf /etc/postfix/main.cf.original

# cat /etc/postfix/main.cf
compatibility_level = 2
queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
mail_owner = postfix
myhostname = fedora.localhost.com
mydomain = localhost.com
myorigin = $mydomain
inet_interfaces = $myhostname
inet_protocols = ipv4
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
unknown_local_recipient_reject_code = 550
mynetworks = 192.168.122.0/24, 127.0.0.0/8, 10.0.0.0/24
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
smtpd_banner = $myhostname ESMTP
debug_peer_level = 2
debugger_command =
         PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
         ddd $daemon_directory/$process_name $process_id & sleep 5
sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no
manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix/samples
readme_directory = /usr/share/doc/postfix/README_FILES
meta_directory = /etc/postfix
shlib_directory = /usr/lib64/postfix
message_size_limit = 10485760
mailbox_size_limit = 1073741824
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
smtpd_recipient_restrictions = permit_mynetworks,permit_auth_destination,permit_sasl_authenticated,reject
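
- After writing main.cf, sanity-check the configuration using tools shipped with Postfix:

#postfix check
#postconf -n

One assumption worth flagging: the Dovecot configuration below reads mail from maildir:~/Maildir, while Postfix's local delivery defaults to mbox in /var/spool/mail. If you want the two to line up, also set:

#postconf -e 'home_mailbox = Maildir/'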

- Start the service and enable it to persist across reboots:
# systemctl start postfix
# systemctl enable postfix

- If your firewall is running, add the 'smtp' service:
#firewall-cmd --add-service=smtp --permanent
#firewall-cmd --reload

Configuring MDA

- Install dovecot:
#dnf install dovecot -y

- Take backups of the original files, then paste in the contents below, changing values to match your infra setup:

#mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.original
#mv /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.original
#mv /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.original
#mv /etc/dovecot/conf.d/10-master.conf /etc/dovecot/conf.d/10-master.conf.original
#mv /etc/dovecot/conf.d/10-ssl.conf /etc/dovecot/conf.d/10-ssl.conf.original

# cat /etc/dovecot/dovecot.conf 
protocols = imap pop3 lmtp
listen = *,::
dict {
}
!include conf.d/*.conf
!include_try local.conf
#

# cat /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = no
auth_mechanisms = plain login
!include auth-system.conf.ext
#

# cat /etc/dovecot/conf.d/10-mail.conf

mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
}
protocol !indexer-worker {
}
mbox_write_locks = fcntl
#

# cat /etc/dovecot/conf.d/10-master.conf
service imap-login {
  inet_listener imap {
  }
  inet_listener imaps {
  }
}
service pop3-login {
  inet_listener pop3 {
  }
  inet_listener pop3s {
  }
}
service lmtp {
  unix_listener lmtp {
  }
}
service imap {
}
service pop3 {
}
service auth {
  unix_listener auth-userdb {
  }
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}
service auth-worker {
}
service dict {
  unix_listener dict {
  }
}
#

# cat /etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
ssl_cipher_list = PROFILE=SYSTEM
#
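
- Before starting the service, ask Dovecot to parse and dump the merged configuration; any typo in the files above will show up here:

#doveconf -n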

- Start the service and enable it to persist across reboots:
# systemctl start dovecot
# systemctl enable dovecot

- If your firewall is running, add the 'pop3' and 'imap' services:
#firewall-cmd --add-service={pop3,imap} --permanent
#firewall-cmd --reload

Configure MUA

Open 'Evolution' and configure it as shown below:

Edit -> Preferences -> Add ->Next  





Leave the rest at their defaults, continue, and click [OK].

Testing

Compose an email to yourself and read it back :)
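
You can also test from the shell before opening Evolution. A minimal sketch, assuming the mailx package is installed and 'user' is an existing local account on this post's example domain:

#echo "test body" | mail -s "test subject" user@localhost.com
#ls /home/user/Maildir/new/

If you set home_mailbox = Maildir/ as noted earlier, the message lands as a new file in the user's Maildir.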




Thanks

Saturday, 26 August 2017

Installation of Jenkins in Linux

In this blog, I will summarize the installation of Jenkins in a few steps:

1. Jenkins is a Java-based application, so install Java version 8
2. Tomcat is required to deploy the Jenkins WAR file, so install Apache Tomcat version 9
3. Download the Jenkins WAR file
4. Deploy the Jenkins WAR file using Tomcat
5. Install the suggested plugins

Environment: CentOS / Redhat

- Install the latest Java 8 version:

#yum install java-1.8.0-openjdk
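
Verify the installation (the exact output depends on your build):

#java -version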

- Download Apache Tomcat 
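The download link did not survive here; as a sketch, this milestone release can be fetched from the Apache archive (the URL pattern is my assumption, verify it against archive.apache.org before relying on it):

#wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.0.M10/bin/apache-tomcat-9.0.0.M10.tar.gz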


#tar -xzvf apache-tomcat-9.0.0.M10.tar.gz

#mv apache-tomcat-9.0.0.M10 apache9

- Provide username and password for Apache Tomcat

#mv apache9/conf/tomcat-users.xml apache9/conf/tomcat-users.xml.original
#vim apache9/conf/tomcat-users.xml


<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<user username="jenkins" password="jenkins" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>


#cd apache9/bin
#./startup.sh

[user@localhost bin]$ ./startup.sh
Using CATALINA_BASE:   /home/vagrant/tomcat9
Using CATALINA_HOME:   /home/vagrant/tomcat9
Using CATALINA_TMPDIR: /home/vagrant/tomcat9/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /home/vagrant/tomcat9/bin/bootstrap.jar:/home/vagrant/tomcat9/bin/tomcat-juli.jar
Tomcat started.
[user@localhost bin]$

- Open your local browser and point it to http://localhost:8080 to display the Apache Tomcat welcome page.


In order to deploy the Jenkins WAR file, click on "Manager App" and provide the credentials written in tomcat-users.xml.
In the Deploy section, enter your context path and choose the WAR file to be deployed.
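
If you prefer the shell over the GUI, Tomcat's manager also exposes a text interface that can deploy a WAR directly; a sketch, assuming the jenkins/jenkins credentials defined above and jenkins.war in the current directory:

#curl -u jenkins:jenkins -T jenkins.war "http://localhost:8080/manager/text/deploy?path=/jenkins&update=true"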


Once the deployment completes, Jenkins appears in the application list; click on it, and you will be asked to unlock Jenkins with the password located under your home directory.
#cat /home/user/.jenkins/secrets/initialAdminPassword


Once unlocked, install the suggested plugins and get started. Create your first admin user, and Jenkins is ready.



Saturday, 10 June 2017

Configure ELK stack for centralized log management

I have two CentOS 7 machines: on node1 I configured Elasticsearch, Logstash, and Kibana (ELK); on node2 I configured Filebeat to send all logs to Logstash.

Hostnames : node1 & node2
Environment : CentOS 7.3
RPM versions : elasticsearch/kibana/logstash/filebeat - 5.4.1

Minimum requirements: ensure the Java packages are installed and that the Elasticsearch server has enough memory for the clients you are trying to configure.

We will install everything (Elasticsearch/Logstash/Kibana) on a single node (node1), and the client (node2) will forward all the logs under /var/log/* to Logstash on node1.

Let's start ...

node1:

Download the RPMs and install them using yum; Kibana ships as a tarball, which we will extract to a directory.


Install Elasticsearch:
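
The download commands did not make it into the post; as a sketch, the 5.4.1 RPM can be fetched and installed like this (the artifacts.elastic.co URL is my assumption, verify it on elastic.co):

[root@node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.rpm
[root@node1 ~]# yum localinstall elasticsearch-5.4.1.rpm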

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 ~]# systemctl start elasticsearch.service
[root@node1 ~]# systemctl status elasticsearch.service
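
Once the service is up (give it a few seconds), Elasticsearch answers on port 9200:

[root@node1 ~]# curl http://localhost:9200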

Install kibana:

[root@node1 ~]# tar -xzvf kibana-5.4.1-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# cd /usr/local/
[root@node1 ~]# mv kiba* kibana

Create a systemd unit so that the service starts when the system boots:

[root@node1 ~]# vim /etc/systemd/system/kibana.service
[Unit]
Description=Kibana

[Service]
ExecStart=/usr/local/kibana/bin/kibana

[Install]
WantedBy=multi-user.target

[root@node1 system]# systemctl enable kibana.service
[root@node1 system]# systemctl start kibana.service
[root@node1 system]# systemctl status kibana.service

Now point your browser to http://<node1-ip>:5601 to reach your Kibana dashboard.

Install Logstash:
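
Logstash is installed the same way as Elasticsearch (again, the exact URL is my assumption):

[root@node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.1.rpm
[root@node1 ~]# yum localinstall logstash-5.4.1.rpm

Then create the pipeline configuration: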

[root@node1 ~]# cat /etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
[root@node1 ~]#
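
You can have Logstash validate the pipeline before enabling the service:

[root@node1 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit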

[root@node1 ~]# systemctl enable logstash
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@node1 ~]# systemctl start logstash.service
[root@node1 ~]# systemctl status logstash.service

node2:

Install filebeat:

Download and install filebeat RPM


[root@node2 ~]# yum localinstall filebeat-5.4.1-x86_64.rpm
[root@node2 ~]# systemctl enable filebeat.service
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
[root@node2 ~]# systemctl start filebeat.service
[root@node2 ~]# systemctl status filebeat.service

Make changes to the Filebeat configuration so that it sends logs to your Logstash server (only one output section may be enabled, so disable the default elasticsearch output if it is active):

[root@node2 ~]# vim /etc/filebeat/filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.122.100:5044"]
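
For completeness, the input side of filebeat.yml must point at the logs you want shipped; a minimal sketch matching this post's goal of forwarding /var/log/* (Filebeat 5.x prospector syntax):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log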

Errors and info: logs you need to check if something goes wrong.

node1:

elasticsearch:

[root@node1 ~]# cat /var/log/elasticsearch/elasticsearch.log

[2017-06-09T18:11:34,760][INFO ][o.e.n.Node               ] [] initializing ...
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.4gb], net total_space [6.1gb], spins? [unknown], types [rootfs]
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-09T18:11:34,976][INFO ][o.e.n.Node               ] node name [sfAWP7D] derived from node ID [sfAWP7DYQpiQTVtuRd0DFw]; set [node.name] to override
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] version[5.4.1], pid[2966], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-06-09T18:11:37,704][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [aggs-matrix-stats]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [ingest-common]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-expression]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-groovy]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-mustache]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-painless]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [percolator]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [reindex]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty3]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty4]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] no plugins loaded
[2017-06-09T18:11:42,393][INFO ][o.e.d.DiscoveryModule    ] [sfAWP7D] using discovery type [zen]
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] initialized
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] [sfAWP7D] starting ...
[2017-06-09T18:11:44,190][INFO ][o.e.t.TransportService   ] [sfAWP7D] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-06-09T18:11:47,411][INFO ][o.e.c.s.ClusterService   ] [sfAWP7D] new_master {sfAWP7D}{sfAWP7DYQpiQTVtuRd0DFw}{CNOzn0gIRBqP_pVHl-xusQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)

logstash:


[root@node1 ~]# cat /var/log/logstash/logstash-plain.log
[2017-06-09T20:44:22,105][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}

[2017-06-09T20:44:22,135][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-06-09T20:44:22,395][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x554cf8d1 URL:http://localhost:9200/>}
[2017-06-09T20:44:22,422][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-09T20:44:22,567][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-09T20:44:22,599][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x74f64db6 URL://localhost>]}
[2017-06-09T20:44:22,813][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-06-09T20:44:23,857][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-06-09T20:44:23,978][INFO ][logstash.pipeline        ] Pipeline main started
[2017-06-09T20:44:24,112][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

node2:

filebeat:

[root@node2 ~]#head -100 /var/log/filebeat/filebeat
2017-06-09T20:37:53+05:30 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-06-09T20:37:53+05:30 INFO Setup Beat: filebeat; Version: 5.4.1
2017-06-09T20:37:53+05:30 INFO Max Retries set to: 3
2017-06-09T20:37:53+05:30 INFO Activated logstash as output plugin.
2017-06-09T20:37:53+05:30 INFO Publisher name: cen02.elktest.com
2017-06-09T20:37:53+05:30 INFO Flush Interval set to: 1s
2017-06-09T20:37:53+05:30 INFO Max Bulk Size set to: 2048
2017-06-09T20:37:53+05:30 INFO filebeat start running.
2017-06-09T20:37:53+05:30 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-06-09T20:37:53+05:30 INFO Metrics logging every 30s
2017-06-09T20:37:53+05:30 INFO Loading registrar data from /var/lib/filebeat/registry
2017-06-09T20:37:53+05:30 INFO States Loaded from registrar: 0
2017-06-09T20:37:53+05:30 INFO Loading Prospectors: 1
2017-06-09T20:37:53+05:30 INFO Prospector with previous states loaded: 0
2017-06-09T20:37:53+05:30 INFO Starting prospector of type: log; id: 17005676086519951868
2017-06-09T20:37:53+05:30 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-06-09T20:37:53+05:30 INFO Starting Registrar
2017-06-09T20:37:53+05:30 INFO Start sending events to output
2017-06-09T20:37:53+05:30 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/wpa_supplicant.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/yum.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions-uninstall.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/Xorg.0.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/boot.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/vboxadd-install.log



Thanks

Monday, 8 May 2017

Configure ELK using dockers

I wish to keep the services in three different containers and link the containers together to access the ELK stack.
Installation is very easy: we will pull the Docker images and run the containers.

Elasticsearch: 

$sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 elasticsearch
9c7d52445691015e21a7007e35aca935b9a0dcbbd9560f170cc07c8adc08ae63

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9c7d52445691        elasticsearch       "/docker-entrypoint.s"   30 seconds ago      Up 30 seconds       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch
$

Test your configuration by querying Elasticsearch on port 9200 (the JSON below is its response):

$ curl http://localhost:9200
{
  "name" : "PIvLNU_",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Rx7oxSxESvqo-8GNXkDzCA",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

Kibana: a visualization tool that connects to Elasticsearch.

Since Elasticsearch is already running, we need to point the Kibana container at the Elasticsearch container.

$sudo docker run --name kibana -d -p 5601:5601 --link elasticsearch:elasticsearch kibana
2090b8df39a44016e401e8b2fb4e2d79f4d674e9f02ae5794f1ed484fb28913e

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2090b8df39a4        kibana              "/docker-entrypoint.s"   7 seconds ago       Up 4 seconds        0.0.0.0:5601->5601/tcp   kibana

A curl against the Kibana port returns its redirect stub, which shows the container is answering:

[sunlnx@fedora ~]$ curl http://localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>[sunlnx@fedora ~]$

Point your browser to http://localhost:5601, which will redirect to the Kibana default index page.



Logstash:

We will take input from the keyboard (stdin) and have it reflected on the Kibana dashboard.
Create a configuration file for it and start the container.

$cd $PWD/logstash/
$cat logstash.conf

# input from keyboard
input {
  stdin {}
}

# output to the elasticsearch container
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}

The config references elasticsearch by container name, so we link the two containers together and mount the config directory into the Logstash container:

$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config logstash -f /config/logstash.conf

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
10:41:27.928 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
10:41:28.003 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"b359fa87-aef2-4cb1-a533-84767399a0f7", :path=>"/var/lib/logstash/uuid"}
10:41:29.340 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs u
.
.
<snip>
this is test1
this is test2
this is test3


Logstash pushes all the typed messages to Elasticsearch, which creates an index for them. Refresh the Kibana dashboard and you will see the new index pattern; click on 'Create' and proceed.

Click on 'Discover', and you will see all the messages taken from the keyboard.





Example 2: 

You can also create your own config that listens on a TCP port, map that port to your localhost, and test with telnet; you should see the log lines in the Kibana dashboard. Run the container with the port published, as shown after the config below.

$cat portfw.conf

input {
  tcp {
    port => 9500
  }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
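
Run Logstash with this config the same way as before, additionally publishing port 9500 so telnet from the host can reach the container:

$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -p 9500:9500 -v $PWD:/config logstash -f /config/portfw.conf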

$telnet localhost 9500
<type your message> 

I hope you found this an easy way to configure the ELK stack. You can find more Logstash configuration examples at https://www.elastic.co/guide/en/logstash/current/config-examples.html

Thanks