Comments on How to Install Elastic Stack on CentOS 7
In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server for monitoring server logs. Then I'll show you how to install Elastic Beats on CentOS 7 and Ubuntu 16.04 LTS client machines.
14 Comment(s)
Fresh CentOS 7 box, no netstat. I needed to run:
sudo yum install net-tools
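On the same note, a minimal CentOS 7 install already ships ss from the iproute package, which covers the same check without installing anything. A small sketch (the netstat flags are shown only for comparison):

```shell
# List listening TCP/UDP sockets without installing net-tools:
ss -tulpn

# Equivalent netstat invocation once net-tools is installed:
# netstat -plntu
```

Handy when you just want to confirm that Elasticsearch (9200), Kibana (5601), or the beats input port are actually listening.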
Something is wrong with step 8: "Unable to fetch mapping. Do you have indices matching the pattern?" with no "Create" button. The Filebeat service is running on the CentOS client; I do not have an Ubuntu client.
I worked out the "Unable to fetch mapping" issue. For the certificate in step 5, I blindly copied the subjectAltName value into openssl.cnf, which did not match the ELK server IP address I was using. After recreating and copying the certificate file and restarting the services, it all works!
Maybe add some checks:

Is any data getting into Elasticsearch to create the index?
curl -XGET 'localhost:9200/_cat/indices'
(you should see a "filebeat" index, not just ".kibana")

For the client:
tail /var/log/filebeat/filebeat

And for the server:
tail /var/log/logstash/logstash-plain.log
The errors were pretty descriptive once I found them.
"ERR Connecting error publishing events (retrying): x509: certificate is valid for 10.0.15.10, not MY.IP.ADDRESS"
"[ERROR][org.logstash.beats.BeatsHandler] Exception: Connection reset by peer"
Thanks for the guide!
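For anyone hitting the same x509 error, here is a minimal sketch of recreating the certificate with a subjectAltName that matches the ELK server. The IP 10.0.15.10 is the tutorial's example address, and the /tmp file names are illustrative, not the guide's exact paths; replace both with your own:

```shell
# Write a throwaway OpenSSL config whose subjectAltName is the IP that
# the Filebeat clients actually connect to:
cat > /tmp/elk-openssl.cnf <<'EOF'
[req]
distinguished_name = req_dn
x509_extensions = v3_ext
prompt = no
[req_dn]
CN = elk-server
[v3_ext]
subjectAltName = IP:10.0.15.10
EOF

# Generate a new key and self-signed certificate from that config:
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /tmp/logstash.key -out /tmp/logstash.crt \
  -config /tmp/elk-openssl.cnf

# Verify the SAN before copying the certificate out to the clients:
openssl x509 -in /tmp/logstash.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the SAN printed here does not match the address in the clients' filebeat.yml, you will get exactly the "certificate is valid for X, not Y" error quoted above.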
Thanks for your work.
One error (there may be others, but I have been testing it for several days and it is working today): document-type: syslog should be document_type: syslog.
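For context on why the key name matters, the guide's Logstash filter only fires when the event type matches. A rough sketch of the shape (this is the standard syslog grok pattern, not necessarily the guide's exact filter):

```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
  }
}
```

With the misspelled document-type key, events arrive without their type set to "syslog", so the conditional never matches and the filter is silently skipped.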
Well done! It's a nice detailed article.
I have translated this article into Chinese; the link is as follows:
https://github.com/LCTT/TranslateProject/blob/master/translated/tech/20170120%20How%20to%20Install%20Elastic%20Stack%20on%20CentOS%207.md
It will be published to Linux.cn (https://linux.cn/) shortly, and many Chinese Linux lovers will see it.
When running "systemctl status filebeat" I got this error:
filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since ven. 2017-05-05 14:11:34 CEST; 1h 0min ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 21681 ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat (code=exited, status=1/FAILURE)
Main PID: 21681 (code=exited, status=1/FAILURE)
What is the problem?
In step 7, when editing the filebeat.yml config file on the client server:
document-type: syslog
should be
document_type: syslog
Otherwise the Logstash filter will not be applied:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Hello,
I am getting the following error:
[root@ nginx]# nginx -t
nginx: [emerg] "server" directive is not allowed here in /etc/nginx/conf.d/kibana.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed
[root@ nginx]#
And my kibana.conf is:
server {
    listen 80;
    server_name kibana.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
}
Could you please let me know what the issue with my config is?
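One thing worth checking for this particular error (a sketch, assuming the stock CentOS nginx.conf layout): a server block is only valid inside http { }, and files under /etc/nginx/conf.d/ are pulled in by an include line that must itself sit inside the http block. If that include line was removed or moved to the top level, every conf.d server block triggers exactly this "directive is not allowed here" message.

```shell
# conf.d/*.conf is normally included from inside http { } like this;
# reproduce the check on a throwaway copy first:
cat > /tmp/nginx-main.conf <<'EOF'
http {
    include /etc/nginx/conf.d/*.conf;
}
EOF
grep -n "include /etc/nginx/conf.d" /tmp/nginx-main.conf

# On the real machine, confirm the include line exists and is inside http { }:
# grep -n "include /etc/nginx/conf.d" /etc/nginx/nginx.conf
```

If the grep on the real nginx.conf finds nothing, restore the include line inside the http block and rerun nginx -t.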
Excellent article! The first article I came across that works from beginning to end as documented.
Hi, have you guys looked into https://nxlog.co? It's a free, open-source alternative which is highly scalable and enables high-performance centralized log management. Anyone?
On the Kibana site I get "Unable to fetch mapping. Do you have indices matching the pattern?"
When I run tail /var/log/logstash/logstash-plain.log on CentOS Server the result is:
[root@localhost /]# tail /var/log/logstash/logstash-plain.log
[2017-10-27T22:08:43,763][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"e4464b21-9dc4-4ff7-9de0-a2773c8060b2", :path=>"/var/lib/logstash/uuid"}
[2017-10-27T22:08:47,124][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5443"}
[2017-10-27T22:08:47,400][INFO ][org.logstash.beats.Server] Starting server on port: 5443
[2017-10-27T22:08:48,013][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
[2017-10-27T22:08:48,014][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x2e1d70db URL:http://localhost:9200>, :healthcheck_path=>"/"}
[2017-10-27T22:08:48,395][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x2e1d70db URL:http://localhost:9200>}
[2017-10-27T22:08:48,396][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
[2017-10-27T22:08:48,576][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-10-27T22:08:48,587][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-27T22:08:49,133][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
On CentOS client:
2017-10-27T22:51:07-06:00 INFO No non-zero metrics in the last 30s
2017-10-27T22:51:10-06:00 ERR Connecting error publishing events (retrying): dial tcp 192.168.1.120:5443: getsockopt: no route to host
[root@localhost /]# ping 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=0.296 ms
On Ubuntu Client:
root@ubuntuc1:~# tail /var/log/filebeat/filebeat
2017-10-27T22:50:59-06:00 INFO No non-zero metrics in the last 30s
2017-10-27T22:51:29-06:00 INFO No non-zero metrics in the last 30s
2017-10-27T22:51:32-06:00 ERR Connecting error publishing events (retrying): dial tcp 192.168.1.120:5443: getsockopt: no route to host
root@ubuntuc1:~# ping 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=0.306 ms
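A "no route to host" error from a machine that answers ping usually means a firewall is rejecting the port, not an actual routing problem. A quick sketch for checking from a client, using bash's built-in /dev/tcp so no nc or telnet is needed (192.168.1.120 and 5443 are taken from the logs above); the firewall-cmd lines are the usual firewalld fix on the ELK server and are left commented out:

```shell
# Probe the beats port from the client:
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.1.120/5443' 2>/dev/null \
  && echo "port 5443 reachable" \
  || echo "port 5443 blocked or closed"

# If blocked, open it on the ELK server (firewalld):
# sudo firewall-cmd --add-port=5443/tcp --permanent
# sudo firewall-cmd --reload
```

If the probe succeeds but Filebeat still fails, the problem is higher up (TLS, certificate SAN); if it fails while ping works, look at firewalld on the server first.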
Very useful how-to document, worked well, thanks a lot!
Hi Arul,
The tutorial was very useful. I finished all the installations and they were successful, but I have gone wrong somewhere, I don't know exactly where. Can you please clarify the queries below:
1) Step 4: You used elk-stack.co in the /etc/nginx/conf.d/kibana.conf file. I can use my own IP (where Kibana is installed) instead, right?
2) Nginx: The Nginx service status is active, but when I run the netstat command I cannot see anything listening on port 80 or any nginx process.
3) Also, the Kibana service gets stopped automatically.
Hi, should we install and configure Elasticsearch, Kibana, Logstash, Nginx, etc. on separate servers?
I would like to install each service separately if possible.
Thank you.