How to Install Elastic Stack on CentOS 7

Elasticsearch is an open source search engine based on Lucene, developed in Java. It provides a distributed, multitenant full-text search engine with an HTTP web interface, and data is stored, queried, and retrieved as JSON documents. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the 'Elastic Stack', or ELK Stack.

Logstash is an open source tool for managing events and logs. It provides real-time pipelining for data collection. Logstash collects your log data, converts the data into JSON documents, and stores them in Elasticsearch.

Kibana is an open source data visualization tool for Elasticsearch. Kibana provides a pretty dashboard web interface. It allows you to manage and visualize data from Elasticsearch. It's not just beautiful, but also powerful.

In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server for monitoring server logs. Then I'll show you how to install and configure 'Elastic Beats' on a CentOS 7 and an Ubuntu 16.04 client operating system.

Prerequisites

  • CentOS 7 64-bit with 4 GB of RAM - elk-master
  • CentOS 7 64-bit with 1 GB of RAM - client1
  • Ubuntu 16.04 64-bit with 1 GB of RAM - client2

Step 1 - Prepare the Operating System

In this tutorial, we will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.

vim /etc/sysconfig/selinux

Change the SELINUX value from 'enforcing' to 'disabled'.

SELINUX=disabled
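
If you prefer to make the change non-interactively, a sed one-liner does the same thing (a small sketch; it assumes the file still contains the default 'SELINUX=enforcing' line):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux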

Then reboot the server.

reboot

Log in to the server again and check the SELinux state.

getenforce

Make sure the result is 'Disabled'.

Step 2 - Install Java

Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8, and the Oracle JDK 1.8 is recommended. I will install Java 8 from the official Oracle rpm package.

Download Java 8 JDK with the wget command.

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"

Then install it with this rpm command:

rpm -ivh jdk-8u77-linux-x64.rpm

Finally, check the Java JDK version to ensure that it is working properly.

java -version

You will see the Java version installed on the server.

Step 3 - Install and Configure Elasticsearch

In this step, we will install and configure Elasticsearch. I will install Elasticsearch from an rpm package provided by elastic.co and configure it to run on localhost (to make the setup secure and ensure that it is not reachable from the outside).

Before installing Elasticsearch, add the elastic.co key to the server.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, download Elasticsearch 5.1 with wget and then install it.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
rpm -ivh elasticsearch-5.1.1.rpm

Elasticsearch is installed. Now go to the configuration directory and edit the elasticsearch.yml configuration file.

cd /etc/elasticsearch/
vim elasticsearch.yml

Enable the memory lock for Elasticsearch by removing the comment on line 40. This disables memory swapping for Elasticsearch.

bootstrap.memory_lock: true

In the 'Network' block, uncomment the network.host and http.port lines.

network.host: localhost
http.port: 9200

Save the file and exit the editor.

Now edit the elasticsearch.service file for the memory lock configuration.

vim /usr/lib/systemd/system/elasticsearch.service

Uncomment the LimitMEMLOCK line.

LimitMEMLOCK=infinity

Save and exit.
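
Note: edits to the packaged unit file can be overwritten when the elasticsearch package is updated. If you prefer, a systemd drop-in override achieves the same result (a sketch using standard systemd tooling):

sudo systemctl edit elasticsearch

Then add these lines in the editor that opens and save:

[Service]
LimitMEMLOCK=infinity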

Edit the sysconfig configuration file for Elasticsearch.

vim /etc/sysconfig/elasticsearch

Uncomment line 60 and make sure the value is 'unlimited'.

MAX_LOCKED_MEMORY=unlimited

Save and exit.

The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, and we have disabled memory swapping for it by enabling mlockall on the CentOS server.

Reload systemd, enable Elasticsearch to start at boot time, then start the service.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'state' for port 9200 is 'LISTEN'.

netstat -plntu
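
Note: a minimal CentOS 7 installation does not include netstat. Install the net-tools package first, or use the ss command instead:

yum -y install net-tools

ss -plntu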

[Screenshot: Elasticsearch running on port 9200]

Then check the memory lock to ensure that mlockall is enabled, and check that Elasticsearch is running with the commands below.

curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'

You will see the results below.

[Screenshot: Memory lock check and Elasticsearch status]
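
In the output of the first command, every node should report the memory lock as enabled, roughly like this (node ID abbreviated):

{
  "nodes" : {
    "..." : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}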

Step 4 - Install and Configure Kibana with Nginx

In this step, we will install and configure Kibana with an Nginx web server. Kibana will listen on the localhost IP address and Nginx acts as a reverse proxy for the Kibana application.

Download Kibana 5.1 with wget, then install it with the rpm command:

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm

Now edit the Kibana configuration file.

vim /etc/kibana/kibana.yml

Uncomment the configuration lines for server.port, server.host and elasticsearch.url.

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save and exit.

Add Kibana to run at boot and start it.

sudo systemctl enable kibana
sudo systemctl start kibana

Kibana will run on port 5601 as a node application.

netstat -plntu

[Screenshot: Kibana running as a node application on port 5601]

The Kibana installation is finished. Now we need to install Nginx and configure it as a reverse proxy to be able to access Kibana from the public IP address.

Nginx is available in the EPEL repository; install epel-release with yum.

yum -y install epel-release

Next, install the Nginx and httpd-tools packages.

yum -y install nginx httpd-tools

The httpd-tools package contains tools for the web server; we will use htpasswd from it to create the basic authentication file for Kibana.

Edit the Nginx configuration file and remove the 'server { }' block, so we can add a new virtual host configuration.

cd /etc/nginx/
vim nginx.conf

Remove the server { } block.

[Screenshot: Remove the server block from the Nginx configuration]

Save the file and exit vim.

Now we need to create a new virtual host configuration file in the conf.d directory. Create the new file 'kibana.conf' with vim.

vim /etc/nginx/conf.d/kibana.conf

Paste the configuration below.

server {
    listen 80;

    server_name elk-stack.co;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Save and exit.

Then create a new basic authentication file with the htpasswd command.

sudo htpasswd -c /etc/nginx/.kibana-user admin
TYPE YOUR PASSWORD

Test the Nginx configuration and make sure there is no error. Then add Nginx to run at the boot time and start Nginx.

nginx -t
systemctl enable nginx
systemctl start nginx
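
Optionally, verify from the command line that basic authentication is enforced and that Nginx proxies to Kibana (a quick check; replace 'admin' and the password with your own values). The first request should return '401 Unauthorized', the second a successful response.

curl -I http://localhost/
curl -I -u admin:YOURPASSWORD http://localhost/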

[Screenshot: Nginx virtual host configuration for the Kibana application]

Step 5 - Install and Configure Logstash

In this step, we will install Logstash and configure it to centralize server logs from the clients with Filebeat, then filter and transform the syslog data and move it into the stash (Elasticsearch).

Download Logstash and install it with rpm.

wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm

Generate a new SSL certificate file so that the clients can verify the identity of the elastic server.

Go to the tls directory and edit the openssl.cnf file.

cd /etc/pki/tls
vim openssl.cnf

Add a new line in the '[ v3_ca ]' section for the server identification. Make sure the IP address matches the address of your Logstash server; otherwise, the clients will refuse to connect with an x509 certificate validation error.

[ v3_ca ]

# Server IP Address
subjectAltName = IP: 10.0.15.10

Save and exit.

Generate the certificate file with the openssl command.

openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

The certificate files can be found in the '/etc/pki/tls/certs/' and '/etc/pki/tls/private/' directories.
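
You can confirm that the certificate contains the IP address you configured with a quick openssl inspection:

openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"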

Next, we will create new configuration files for Logstash. We will create a new 'filebeat-input.conf' file to configure the log sources for Filebeat, then a 'syslog-filter.conf' file for syslog processing, and the 'output-elasticsearch.conf' file to define the Elasticsearch output.

Go to the logstash configuration directory and create the new configuration files in the 'conf.d' subdirectory.

cd /etc/logstash/
vim conf.d/filebeat-input.conf

Input configuration: paste the configuration below.

input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Save and exit.

Create the syslog-filter.conf file.

vim conf.d/syslog-filter.conf

Paste the configuration below.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

We use a filter plugin named 'grok' to parse the syslog files.

Save and exit.

Create the output configuration file 'output-elasticsearch.conf'.

vim conf.d/output-elasticsearch.conf

Paste the configuration below.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Save and exit.
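
Before starting the service, you can test the three configuration files (a sketch; the rpm installs the Logstash binary under /usr/share/logstash/bin, and passing --path.settings avoids settings warnings):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/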

Finally, add Logstash to start at boot time and start the service.

sudo systemctl enable logstash
sudo systemctl start logstash

[Screenshot: Logstash started on port 5443 with SSL connection]

Step 6 - Install and Configure Filebeat on the CentOS Client

Beats are data shippers, lightweight agents that can be installed on the client nodes to send huge amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 beats available, 'Filebeat' for 'Log Files', 'Metricbeat' for 'Metrics', 'Packetbeat' for 'Network Data' and 'Winlogbeat' for the Windows client 'Event Log'.

In this tutorial, I will show you how to install and configure 'Filebeat' to transfer data log files to the Logstash server over an SSL connection.

Log in to the client1 server. Then copy the certificate file from the elastic server to the client1 server.

ssh [email protected]

Copy the certificate file with the scp command.

scp [email protected]:~/logstash-forwarder.crt .
TYPE THE elk-server PASSWORD

Create a new directory and move the certificate file into it.

sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/

Next, import the elastic key on the client1 server.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Download Filebeat and install it with rpm.

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm

Filebeat has been installed. Go to the configuration directory and edit the file 'filebeat.yml'.

cd /etc/filebeat/
vim filebeat.yml

In the paths section on line 21, add the new log files. We will add two files: '/var/log/secure' for SSH activity and '/var/log/messages' for the server log.

  paths:
    - /var/log/secure
    - /var/log/messages

Add a new configuration on line 26 to define the document type for these files. Note that the option name uses an underscore; with a hyphen ('document-type'), Filebeat ignores the setting and the Logstash syslog filter will not be applied.

  document_type: syslog

Filebeat uses Elasticsearch as the output target by default. In this tutorial, we will change that to Logstash. Disable the Elasticsearch output by commenting out lines 83 and 85 as shown below.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Now add the new Logstash output configuration. Uncomment the Logstash output configuration and change all the values to the configuration shown below.

output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

Save the file and exit vim.
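
Optionally, test the Filebeat configuration before starting the service (a sketch; the rpm places the Filebeat binary under /usr/share/filebeat/bin):

sudo /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml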

Add Filebeat to start at boot time and start it.

sudo systemctl enable filebeat
sudo systemctl start filebeat
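
If events do not arrive on the elk-master server later, check the Filebeat log on the client and the Logstash log on the server; certificate mismatches and connection problems are reported there.

tail /var/log/filebeat/filebeat

And on the elk-master server:

tail /var/log/logstash/logstash-plain.log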

Step 7 - Install and Configure Filebeat on the Ubuntu Client

Connect to the client2 server via SSH.

ssh [email protected]

Copy the certificate file to the client with the scp command.

scp [email protected]:~/logstash-forwarder.crt .

Create a new directory for the certificate file and move the file to that directory.

sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/

Add the elastic key to the server.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Download the Filebeat .deb package and install it with the dpkg command.

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb

Go to the filebeat configuration directory and edit the file 'filebeat.yml' with vim.

cd /etc/filebeat/
vim filebeat.yml

Add the new log file paths in the paths configuration section.

  paths:
    - /var/log/auth.log
    - /var/log/syslog

Set the document type to syslog (again, note the underscore in the option name).

  document_type: syslog

Disable elasticsearch output by adding comments to the lines shown below.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Enable the Logstash output: uncomment the configuration and change the values as shown below.

output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

Save the file and exit vim.

Add Filebeat to start at boot time and start it.

sudo systemctl enable filebeat
sudo systemctl start filebeat

Check the service status.

systemctl status filebeat

[Screenshot: Filebeat running on the Ubuntu client]

Step 8 - Testing Elastic Stack

Open your web browser and visit the elastic stack domain that you used in the Nginx configuration; mine is 'elk-stack.co'. Log in as the admin user with your password to reach the Kibana dashboard.

[Screenshot: Logging in to the Kibana dashboard with basic auth]

Create a new default index 'filebeat-*' and click on the 'Create' button.

[Screenshot: Creating the first index 'filebeat-*' in Kibana]

The default index has been created. If you have multiple beats shipping to the elastic stack, you can set the default index with one click on the 'star' button.

[Screenshot: Filebeat index as the default index on the Kibana dashboard]

Go to the 'Discover' menu and you will see all the log files from the elk-client1 and elk-client2 servers.

[Screenshot: Discovering all log files from the servers]

Here is an example of the JSON output from the elk-client1 server log for a failed SSH login.

[Screenshot: JSON output for a failed SSH login]

And there is much more you can do with the Kibana dashboard; just play around with the available options.

Elastic Stack has been installed on a CentOS 7 server. Filebeat has been installed on a CentOS 7 and an Ubuntu client.
