
How to Install Elastic Stack on Ubuntu 16.04

Elasticsearch is an open source search engine based on Lucene, developed in Java. It provides a distributed, multitenant full-text search engine with an HTTP dashboard web interface (Kibana) and a JSON document scheme. Elasticsearch is a scalable search engine that can be used to search all types of documents, including log files. Elasticsearch is the heart of the 'Elastic Stack', or ELK Stack.

Logstash is an open source tool for managing system events and logs. It provides real-time pipelining to collect data. Logstash will collect the log or data, convert all data into JSON documents, and store them in Elasticsearch.

Kibana is a data visualization interface for Elasticsearch. Kibana provides a pretty dashboard (web interface) that allows you to manage and visualize all of your Elasticsearch data on your own. It's not just beautiful, but also powerful.

In this tutorial, I will show you how to install and configure the Elastic Stack on a single Ubuntu 16.04 server for monitoring server logs, and how to install 'Elastic Beats' on client PCs running the Ubuntu 16.04 and CentOS 7 operating systems.

Prerequisites


  • Ubuntu 16.04 64-bit server with 4 GB of RAM, hostname: elk-master
  • Ubuntu 16.04 64-bit client with 1 GB of RAM, hostname: elk-client1
  • CentOS 7 64-bit client with 1 GB of RAM, hostname: elk-client2

Step 1 - Install Java

Java is required for the Elastic Stack deployment. Elasticsearch requires Java 8, and the Oracle JDK 1.8 is recommended. We will install Java 8 from a PPA repository.

Install the 'python-software-properties' package so that we can add a new repository easily with an apt command.

sudo apt-get update
sudo apt-get install -y python-software-properties software-properties-common apt-transport-https

Add the new Java 8 PPA repository with the 'add-apt-repository' command, then update the repository.

sudo add-apt-repository ppa:webupd8team/java -y
sudo apt-get update

Install Java 8 from the webupd8team PPA repository.

sudo apt-get install -y oracle-java8-installer

When the installation is finished, make sure Java is installed properly on the system by checking the Java version.

java -version

Java Version on Ubuntu 16.04
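"Java 8" here means a runtime whose version string starts with "1.8". If you want to verify this in a script, the sketch below extracts and checks the quoted version; the sample string is illustrative — on a real system you would pipe `java -version 2>&1` (note that java prints its version to stderr) instead of using the hard-coded sample.

```shell
# Extract the quoted version number from a (sample) 'java -version' line.
sample='java version "1.8.0_151"'
ver=$(echo "$sample" | grep -oP '(?<=")[0-9._]+')
echo "$ver"
case "$ver" in
  1.8*) echo "Java 8 detected - OK for Elasticsearch 5.x" ;;
  *)    echo "Java 8 is required" ;;
esac
```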

Step 2 - Install and Configure Elasticsearch

In this step, we will install and configure Elasticsearch. Install Elasticsearch from the elastic repository and configure it to run on the localhost IP.

Before installing Elasticsearch, add the elastic repository key to the server.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add elastic 5.x repository to the 'sources.list.d' directory.

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Update the repository and install Elasticsearch 5 with the apt commands below.

sudo apt-get update
sudo apt-get install -y elasticsearch

Elasticsearch is now installed. Go to the configuration directory and edit the elasticsearch.yml configuration file.

cd /etc/elasticsearch/
vim elasticsearch.yml

Enable the memory lock for Elasticsearch by removing the comment on line 43. This disables memory swapping for Elasticsearch to avoid overloading the server.

bootstrap.memory_lock: true

In the 'Network' block, uncomment the network.host and http.port lines and set the values as below.

network.host: localhost
http.port: 9200

Save the file and exit vim.

Now edit the elasticsearch service file for the memory lock mlockall configuration.

vim /usr/lib/systemd/system/elasticsearch.service

Uncomment the LimitMEMLOCK line.

LimitMEMLOCK=infinity
Save the file and exit.

Edit the default configuration for Elasticsearch in the /etc/default directory.

vim /etc/default/elasticsearch

Uncomment line 60 and make sure the value is 'unlimited'.

MAX_LOCKED_MEMORY=unlimited
Save and exit.

The Elasticsearch configuration is finished. Elasticsearch will run on the localhost IP address on port 9200, and we disabled memory swapping by enabling mlockall on the Ubuntu server.

Reload the systemd unit files, enable Elasticsearch to run at boot time, and then start the service.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Wait a moment for Elasticsearch to start, then check the open ports on the server and make sure the 'state' for port 9200 is 'LISTEN'.

netstat -plntu

Install Elasticsearch on Ubuntu 16.04

Then check the memory lock to ensure that mlockall is enabled. Also check that Elasticsearch is running with the commands below.

curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'

You will see the results below.

mlockall enabled and elasticsearch installed
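What matters in the first response is that the "mlockall" flag is true. Since the response is JSON, a minimal scripted check can simply grep for the flag; the sample response below is abridged and uses a made-up node id:

```shell
# Abridged sample of the _nodes?filter_path=**.mlockall response.
resp='{"nodes":{"aBc123":{"process":{"mlockall":true}}}}'
echo "$resp" | grep -o '"mlockall":true' \
  && echo "memory lock is enabled"
```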

Step 3 - Install and Configure Kibana with Nginx

In this step, we will install and configure Kibana behind an Nginx web server. Kibana will listen on the localhost IP address only, and Nginx will act as the reverse proxy for the Kibana application.

Install Kibana with this apt command:

sudo apt-get install -y kibana

Now edit the kibana.yml configuration file.

vim /etc/kibana/kibana.yml

Uncomment the server.port, server.host and elasticsearch.url lines and set the values as below.

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save the file and exit vim.

Add Kibana to run at boot and start it.

sudo systemctl enable kibana
sudo systemctl start kibana

Kibana will run on port 5601 as a node application.

netstat -plntu

Install Kibana on ubuntu 16.04

The Kibana installation is done. Now we need to install Nginx and configure it as a reverse proxy so that Kibana is reachable from the public IP address.

Next, install the Nginx and apache2-utils packages.

sudo apt-get install -y nginx apache2-utils

Apache2-utils is a package that contains tools for the web server that work with Nginx as well; we will use its htpasswd tool to set up basic authentication for Kibana.

Nginx has been installed. Now we need to create a new virtual host configuration file in the Nginx sites-available directory. Create a new file 'kibana' with vim.

cd /etc/nginx/
vim sites-available/kibana

Paste the configuration below.

server {
    listen 80;

    server_name _;    # replace with your Elastic Stack domain name

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Save the file and exit vim.

Create a new basic authentication file with the htpasswd command.

sudo htpasswd -c /etc/nginx/.kibana-user admin
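The file htpasswd creates is just plain 'user:password-hash' lines. If you are curious about the format, an equivalent entry can be produced with openssl, shown here against a temporary file; 'secret' is a placeholder password, and the hash will differ on every run because of the random salt:

```shell
# Build an htpasswd-style entry with an Apache MD5 (apr1) password hash.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > /tmp/kibana-user.demo
cat /tmp/kibana-user.demo
```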

Activate the kibana virtual host by creating a symbolic link from the kibana file in 'sites-available' to the 'sites-enabled' directory.

sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/

Test the Nginx configuration and make sure there are no errors, then enable Nginx to run at boot time and restart it.

sudo nginx -t
sudo systemctl enable nginx
sudo systemctl restart nginx
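One pitfall worth noting (it also comes up in the comments at the end of this page): Nginx ships with a 'default' site enabled that listens on port 80 as well, and it can take precedence over the kibana virtual host. If Kibana does not appear on port 80, removing the default site's symlink is a likely fix:

```shell
# Remove the default site symlink; the file in sites-available is untouched.
rm -f /etc/nginx/sites-enabled/default
```

Re-run the configuration test and restart Nginx afterwards.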

Kibana with nginx installed on Ubuntu 16.04

Step 4 - Install and Configure Logstash

In this step, we will install and configure Logstash to centralize server logs from the client sources with filebeat, then filter and transform the syslog data and move it into the stash (Elasticsearch).

Install Logstash 5 with the apt command below.

sudo apt-get install -y logstash

Edit the hosts file with vim.

vim /etc/hosts

Add the elastic server IP address and its hostname in the format '<IP-address>   hostname' (replace 192.168.1.10 below with your server's IP address).

192.168.1.10    elk-master

Save the hosts file and exit the editor.

Now generate a new SSL certificate file with OpenSSL so the client sources can identify the elastic server.

cd /etc/logstash/
openssl req -subj /CN=elk-master -x509 -days 3650 -batch -nodes -newkey rsa:4096 -keyout logstash.key -out logstash.crt

Change the '/CN' value to your elastic server hostname.

Certificate files will be created in the '/etc/logstash/' directory.
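Before distributing the certificate to the clients, it is worth confirming its subject (CN) and validity period. The sketch below regenerates an equivalent throwaway certificate under /tmp and inspects it; on the real server, point `openssl x509` at /etc/logstash/logstash.crt instead.

```shell
# Generate a throwaway cert the same way (smaller key for speed),
# then print its subject and expiry date.
openssl req -subj /CN=elk-master -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```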

Next, we will create the configuration files for Logstash: 'filebeat-input.conf' as the input file from filebeat, 'syslog-filter.conf' for syslog processing, and 'output-elasticsearch.conf' to define the Elasticsearch output.

Go to the logstash configuration directory and create the new configuration files in the 'conf.d' directory.

cd /etc/logstash/
vim conf.d/filebeat-input.conf

For the input configuration, paste the configuration below.

input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key => "/etc/logstash/logstash.key"
  }
}

Save and exit.

Create the syslog-filter.conf file.

vim conf.d/syslog-filter.conf

Paste the configuration below.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

We use a filter plugin named 'grok' to parse the syslog files.

Save and exit.
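To get a feel for what the grok pattern pulls out, take a typical syslog line. The PCRE approximations below show roughly how SYSLOGTIMESTAMP, SYSLOGHOST and the program field map onto it; this is a sketch for illustration only — logstash does the real parsing:

```shell
line='Jan 12 08:03:02 elk-client1 sshd[1234]: Failed password for invalid user test'
echo "$line" | grep -oP '^\w{3}\s+\d+ \d{2}:\d{2}:\d{2}'   # syslog_timestamp
echo "$line" | grep -oP '(?<=\d{2}:\d{2}:\d{2} )[^ ]+'     # syslog_hostname
echo "$line" | grep -oP '[^ ]+(?=\[\d+\]:)'                # syslog_program
```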

Create the output configuration file 'output-elasticsearch.conf'.

vim conf.d/output-elasticsearch.conf

Paste the configuration below.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Save and exit.
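The 'index' option uses Logstash's date-based sprintf syntax: '%{[@metadata][beat]}' expands to the shipper's name (here, filebeat) and '%{+YYYY.MM.dd}' to the event date, so each day gets its own index. For an event shipped by filebeat today, the index name comes out like this:

```shell
# Joda-style YYYY.MM.dd corresponds to strftime's %Y.%m.%d.
date +filebeat-%Y.%m.%d
```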

When this is done, add logstash to start at boot time and start the service.

sudo systemctl enable logstash
sudo systemctl start logstash

Install and Configure Logstash on Ubuntu 16.04

Step 5 - Install and Configure Filebeat on an Ubuntu Client

Connect to the elk-client1 server as root with ssh.

ssh root@elk-client1

Copy the certificate file to the client with the scp command.

scp root@elk-master:/etc/logstash/logstash.crt .

Edit the hosts file and add the elk-master IP address.

vim /etc/hosts

Add the configuration below at the end of the file (replace 192.168.1.10 with the elk-master server's IP address).

192.168.1.10    elk-master

Save and exit.

Now we need to add the elastic key to the elk-client1 server.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

We will use the elastic repository with the https download transport, so we need to install the 'apt-transport-https' package on the server.

sudo apt-get install -y apt-transport-https

Add the elastic repository and update all Ubuntu repositories.

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update

Now install 'filebeat' with the apt command.

sudo apt-get install -y filebeat

Next, go to the filebeat configuration directory and edit the file 'filebeat.yml' with vim.

cd /etc/filebeat/
vim filebeat.yml

Add the new log files under the 'paths' configuration.

    - /var/log/auth.log
    - /var/log/syslog

Set the document type to 'syslog'.

  document_type: syslog
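YAML indentation is strict here: each path must be a '- ' list item at the same depth, and the document type sits at the prospector level, or filebeat exits with a prospector configuration error (one commenter below hit exactly this). In filebeat 5.x the surrounding section should end up looking roughly like this sketch (your file will contain more options):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
  document_type: syslog
```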

Disable the elasticsearch output by commenting out the lines as shown below.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Enable the logstash output by uncommenting the configuration and changing the values as shown below.

output.logstash:
  # The Logstash hosts
  hosts: ["elk-master:5443"]
  bulk_max_size: 2048
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

Save and exit.

Move the certificate file to the filebeat directory.

mv ~/logstash.crt /etc/filebeat/

Start filebeat and add it to run at boot time.

sudo systemctl start filebeat
sudo systemctl enable filebeat

Check the service status.

sudo systemctl status filebeat

Install Filebeat on Ubuntu 16.04 Client

Step 6 - Install and Configure Filebeat on a CentOS Client

Beats are data shippers, lightweight agents that can be installed on the client nodes to send huge amounts of data from the client machine to the Logstash or Elasticsearch server. There are 4 beats available, 'Filebeat' for 'Log Files', 'Metricbeat' for 'Metrics', 'Packetbeat' for 'Network Data' and 'Winlogbeat' for the Windows client 'Event Log'.

In this tutorial, I will show you how to install and configure 'Filebeat' to ship log data to the logstash server over a secure SSL connection.

Copy the certificate file from the elastic server to the elk-client2 server. Log in to the elk-client2 server first.

ssh root@elk-client2

Copy the certificate file with the scp command.

scp root@elk-master:/etc/logstash/logstash.crt .

Type in the elk-master server's password when prompted.

Edit the hosts file and add the elk-master server address.

vim /etc/hosts

Add the elk-master server address (replace 192.168.1.10 with your server's IP address).

192.168.1.10    elk-master

Save and exit.

Next, import the elastic key to the elk-client2 server.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add elastic repository to the server.

cd /etc/yum.repos.d/
vim elastic.repo

Paste the configuration below.

[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and exit.

Install filebeat with this yum command.

sudo yum -y install filebeat

Filebeat has been installed. Now go to the configuration directory and edit the file 'filebeat.yml'.

cd /etc/filebeat/
vim filebeat.yml

In the paths section around line 21, add the new log files. We will add two files here: '/var/log/secure' for ssh activity and '/var/log/messages' for the server log.

    - /var/log/secure
    - /var/log/messages

Add a new configuration on line 26 to set the document type to 'syslog'.

  document_type: syslog

By default, filebeat uses elasticsearch as the output. In this tutorial, we will change it to logstash. Disable the elasticsearch output by commenting out lines 83 and 85.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Now add the new logstash output configuration: uncomment the logstash output section and change the values as shown below.

output.logstash:
  # The Logstash hosts
  hosts: ["elk-master:5443"]
  bulk_max_size: 2048
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

Save and exit.

Add filebeat to start at boot time and start it.

sudo systemctl enable filebeat
sudo systemctl start filebeat

Now you can check and watch the filebeat log file to ensure it is running correctly.

tail -f /var/log/filebeat/filebeat

Install Filebeat on CentOS 7 Client Server

Step 7 - Testing

Open your web browser and visit the elastic stack domain that you configured in the nginx configuration, then type in the admin username and your password and press Enter to log in to the Kibana dashboard.

Login to the elastic kibana dashboard

Create a new default index 'filebeat-*' and click on 'Create'.

Create first index filebeat on Kibana Dashboard

The default index has been created. If you have multiple beats on the elastic stack, you can configure a default beat with just a click on the 'star' button.

Filebeat index created as default index

Go to 'Discover' and you will see all the log files from the elk-client1 and elk-client2 servers.

Discover all log from elk-client1 and elk-client2 server

Here is an example of JSON output from the elk-client1 server log for an invalid ssh login.

Example log file for ssh failed login on elk-client1 server

And there is much more you can do with the Kibana dashboard, so just try it out!

The Elastic Stack has been installed on an Ubuntu 16.04 server, and filebeat has been installed on the Ubuntu and CentOS client servers.


Comments


By: kreaper


Not sure how much has changed in the packages in one day (since this post), but already found a problem... (I'm also using an updated fresh install of 16.04)

You say "Now edit the elasticsearch service file for the memory lock mlockall configuration." and point to /usr/lib/systemd/system/elasticsearch.service

However, this file doesn't exist. The folder you specify also doesn't exist. I've also done a search for this. Is there something I'm missing here?

By: thctlo

Even if /usr/lib/systemd/system/elasticsearch.service exists,

editing /usr/lib/systemd/system/elasticsearch.service directly on ubuntu isn't right. You should try to never adjust the original files. Edit it like this:

sudo systemctl edit elasticsearch.service, which results in editing "/etc/systemd/system/elasticsearch.service.d/override.conf"



By: kreaper

Yes I'm aware of this, but for now, I'm just trying to follow the above steps step by step and even though I'm using a fresh updated install of 16.04, I don't find these steps the same on my install.

By: howtoforge

We tested the tutorial here again today and the steps from the tutorial are working on Ubuntu 16.04. There was one typo in the command that installs kibana which has been corrected now.

By: Soner CAKIR

I've tried 3 times with a fresh Ubuntu 16.04 but always get stuck at this part: "mv ~/logstash.crt /etc/filebeat/". It says "mv: cannot stat '/etc/logstash/logstash.crt': No such file or directory". On the server side (elkserver), checking the /etc/logstash/ folder, logstash.crt is there, but it returns an error when you try to mv. Any ideas?

By: muhammad

You need to upload the file 'logstash.crt' from the 'elk-master' server to the 'client ubuntu or centos' with scp command.

On the elk-master - run as root:

ls /etc/logstash/logstash.crt

Make sure you have that file on the 'elk-master' server.

Connect to the elk-client.

ssh root@elk-client1

From the elk-client terminal, download 'logstash.crt' file from elk-master to elk-client with scp.

cd ~/

scp root@elk-master:/etc/logstash/logstash.crt .

Now the 'logstash.crt' file is in the home directory of the elk-client.

Move it to the '/etc/filebeat/' directory.

mv ~/logstash.crt /etc/filebeat/

Hopefully, it can help you

By: ^_^

Running Ubuntu 16.04.2, and on part 2 after starting the services, the system host is not listening on port 9200. Any ideas? Going step-by-step.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Wait a sec for Elasticsearch to run, then check the open port on the server, make sure the 'state' for port 9200 is 'LISTEN'.

netstat -plntu

By: SamCloud

Specify localhost not the IPv4 ;)

By: arek

same problem here... fresh install and no listening on port 9200... everything was installed step by step...

By: czuk

I know how old and grizzled my eyes are, but I can't find a mention of removing the nginx default site link, which stops the reverse proxy from listening on port 80. It seems a sudo rm /etc/nginx/sites-enabled/default is needed.

By: mby

Hey guys,

It seems like my cert is not working properly. When I look at tail -f /var/log/filebeat/filebeat, I am getting the following error message in the logs:

"cannot validate certificate for <my elk master ip> because it doesn't contain any IP SANs"

I tried to create a new certificate with CN=ubuntu (which is the hostname), but I don't have any DNS on my network. Would it be possible to just add the server IP instead? If yes, how should I write it?

By: zedd

Run : export JAVA_HOME="/usr"

and the service will start correctly :)

By: Daniel

Thanks for this awesome post. It worked out fine!

By: John

Hi, I got some issues when running filebeat. It will not start on Ubuntu. I have followed all steps to the letter.

Any help is appreciated.

By: Nexusguy59


    This is a very good tutorial. I am a little confused though at the logstash and filebeat part on a client. In the configuration you say to use port => 5443 (logstash) and for filebeat (on the client) elk-master:5443, but in the elasticsearch installation I don't see port 5443 mentioned anywhere? Can you explain please?




By: Michael Cooper

Every time I do this configuration I get this error from Filebeat, and I know it's bogus:

2018-01-12T08:03:02Z CRIT Exiting: Error in initing prospector: can not convert 'object' into 'string' accessing 'filebeat.prospectors.0.paths.2' (source:'/etc/filebeat/filebeat.yml')

My certificates are correct and in the right place, and Elasticsearch, Logstash and Kibana are working on the ELK-Master server. Why is filebeat so difficult to get running? It shouldn't be this hard.




By: Peter

I am like so totally close on this thing. I'm really impressed with your build instructions. You might want to add a bit about how to get it to run as a server, so that users can browse to it from other PCs. I think I got it figured out in any case.


I've logged into my kibana GUI and I'm on the:

Management / Kibana

     Index patterns page. 


There is a box to type in my "Index pattern" but then under the box I see a red nasty....


Unable to fetch mapping. Do you have indices matching the pattern?

And there is no "Create" button. Can you help me out on this one?

By: Umar

The java installer seems to be obsolete;

any idea how this can be resolved?

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package oracle-java8-installer is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package 'oracle-java8-installer' has no installation candidate

that was very good, thank you very much for this article!!!

Keep posting such useful articles.