Building an ELK Log Platform on CentOS 7
Chapter 1 ELK Platform Introduction
Logs mainly include system logs, application logs, and security logs. Operations and development staff can use them to learn about a server's hardware and software, and to spot configuration errors and their causes. Regular log analysis also reveals the server's load, performance, and security posture, so that problems can be corrected in time.
Logs are usually scattered across many machines. If you manage dozens or hundreds of servers and still read logs by logging in to each machine in turn, the process is tedious and inefficient. The obvious remedy is centralized log management, for example with open-source syslog, collecting the logs of every server in one place.
Once logs are centralized, searching and aggregating them becomes the next problem. Linux commands such as grep, awk, and wc can handle simple searches and counts, but for more demanding queries, sorting, and statistics across a large fleet of machines, these tools quickly fall short.
The open-source real-time log analysis platform ELK solves the problems above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Official site: https://www.elastic.co/products
Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and filters your logs and stores them for later use (for example, searching).
Kibana is likewise open source and free. It provides a friendly web interface for the logs held in Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
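To get a feel for the RESTful interface mentioned above, here is a minimal sketch (assuming an Elasticsearch 2.x instance on localhost:9200; the index test and field message are invented for illustration):
# index a document (the index is created on the fly)
curl -XPUT 'http://localhost:9200/test/doc/1' -d '{"message": "hello elk"}'
# search for it
curl 'http://localhost:9200/test/_search?q=message:hello&pretty'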
Chapter 2 ELK Platform Setup
2.1 System Environment
System: Centos release 6.8 (Final)
ElasticSearch: 2.4.6
Logstash: 6.1.2
Kibana: 4.5.4
Note: Logstash depends on a Java runtime, and Logstash 1.5 and later requires Java 1.7 or higher, so the latest Java version is recommended. Since only the runtime is needed, installing the JRE alone is sufficient.
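A quick sketch of checking for, and if necessary installing, a JRE on CentOS (the OpenJDK package name below is the usual one in the base repositories; adjust to your environment):
java -version                               # is a JRE already present?
yum install -y java-1.8.0-openjdk-headless  # if not, install the OpenJDK 8 JRE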
2.2 ELK Downloads:
https://www.elastic.co/downloads/
2.3 ElasticSearch Installation
2.3.1 Unpack ElasticSearch:
tar -zxvf elasticsearch-2.4.6.tar.gz
cd elasticsearch-2.4.6
2.3.2 Install the Head plugin (optional):
cd elasticsearch-2.4.6
./bin/plugin install mobz/elasticsearch-head
ELK:/app/elk/elasticsearch # ls
bin config data lib LICENSE.txt logs modules NOTICE.txt plugins README.textile to
ELK:/app/elk/elasticsearch # ./bin/plugin install mobz/elasticsearch-head
ELK:/app/elk/elasticsearch # ls plugins/
head
2.3.3 Edit the ES configuration file:
ELK:/app/elk/elasticsearch # egrep -v "^$|^#" config/elasticsearch.yml
node.name: node-1
path.data: /app/elk/elasticsearch/data # where ES stores its data
path.logs: /app/elk/elasticsearch/logs # where ES stores its logs
network.host: 192.168.1.88 # IP address of the ELK server
http.port: 9200 # default Elasticsearch HTTP port
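Since ES 2.x refuses to start as root, it helps to prepare a dedicated account and the data/log directories beforehand (the user name elk below is just an example):
useradd elk                                   # dedicated non-root account
mkdir -p /app/elk/elasticsearch/{data,logs}   # the paths referenced in elasticsearch.yml
chown -R elk:elk /app/elk/elasticsearch
su - elk                                      # switch to this user before starting ES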
2.3.4 Start ES (as a non-root user):
cd elasticsearch-2.4.6
./bin/elasticsearch
2.3.4.1 Note: press Ctrl+C to stop it. Alternatively, start ES as a background process:
cd elasticsearch-2.4.6
nohup ./bin/elasticsearch &
2.3.5 Check in a browser
Opening http://192.168.1.88:9200 returns the ES version and other basic information (screenshot omitted).
The Head plugin installed earlier lets a browser interact with the ES cluster: you can inspect the cluster state, browse documents, run searches, and issue plain REST requests. Open http://192.168.1.88:9200/_plugin/head to view the cluster status:
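The same information is available from the command line via standard ES APIs:
curl 'http://192.168.1.88:9200/'                        # version and build info
curl 'http://192.168.1.88:9200/_cluster/health?pretty'  # cluster status (green/yellow/red)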
2.3.5.1 View index information
http://10.10.0.37:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open logstash-2018.08.27 CFhNdlymRbSIq3NtI7zvZA 5 1 120953 0 80.2mb 80.2mb
yellow open catalina-2018.09.06 QweEdU4oSWGmqHIWXkP-Gw 5 1 2843 0 1.5mb 1.5mb
yellow open catalina-mt-2018.09.10 FhbHGtASSWSdEhIGwoLbow 5 1 710 0 499.4kb 499.4kb
yellow open test _UgNHi1MSLCkwU5uwk_noQ 5 1 1 0 5.7kb 5.7kb
yellow open tomcat-36-it-2018.09.10 xg69v6OcRi6X60oCDbBIyg 5 1 2610 0 1.2mb 1.2mb
yellow open tomcat-36-tz-2018.09.10 QT4qxSMvQSm3Enur8Tv20w 5 1 4192 0 2.3mb 2.3mb
yellow open filebeat-6.4.0-2018.08.27 _zdKjnvtSqCu1U6IurMPjQ 3 1 23 0 27.3kb 27.3kb
yellow open .kibana x510kHetRcejCKTjgPtNcg 1 1 6 1 66.7kb 66.7kb
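Note: the yellow status above simply means the replica shards are unassigned; on a single node a replica cannot live on the same machine as its primary, so this is normal for a one-node setup.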
2.4 Logstash Deployment
Note: Logstash is essentially just a collector; you need to give it an Input and an Output (and either may be more than one). Since we want to ship the Log4j output of our Java code into ElasticSearch, the Input here is Log4j and the Output is ElasticSearch; the concrete example in 2.4.2.1 below uses a file input that tails Tomcat access logs instead.
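The smallest possible pipeline makes the Input/Output idea concrete; this sketch reads events from stdin and prints them to stdout, with no files or ES involved:
/app/logstash-6.1.2/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Type a line at the prompt and Logstash echoes it back as a structured event.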
2.4.1 Unpack Logstash:
tar -zxvf logstash-6.1.2.tar.gz
cd logstash-6.1.2
2.4.2 Write the log configuration file
2.4.2.1 Configuration file for Tomcat logs:
cat json.conf
input {
file {
path => ["/app/tomcat-jz-web-interface/logs/localhost_access_log.*.log"] # log file path
type => "tomcat_log" # log type
start_position => "beginning"
codec => json
}
}
filter {
date {
match => [ "timestamp" , "YYYY-MM-dd HH:mm:ss" ]
}
}
output {
elasticsearch {
hosts => ["192.168.1.201:9200"] # IP and port of the ELK server
index => "tomcat-pc-%{+YYYY.MM.dd}" # index name, making the logs easy to find in Kibana
}
stdout {
codec => rubydebug
}
}
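Note that codec => json above assumes each access-log line is a single JSON object; Tomcat can produce such lines by customizing the pattern attribute of the AccessLogValve in server.xml. A hypothetical line this pipeline would accept, with a timestamp field matching the date filter above:
{"timestamp":"2018-08-20 23:48:22","host":"192.168.1.10","method":"GET","uri":"/index.jsp","status":"200","bytes":"512"}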
2.4.3 Check that the Logstash configuration file is valid
/app/logstash-6.1.2/bin/logstash -t -f /app/tomcat-jz-web-interface/logs/json.conf
Note: mind the order of -t and -f; it must not be reversed.
2.4.4 Start the Logstash service (as a non-root user):
/app/logstash-6.1.2/bin/logstash -f /app/tomcat-jz-web-interface/logs/json.conf
# the command above runs in the foreground
nohup /app/logstash-6.1.2/bin/logstash -f /app/tomcat-jz-web-interface/logs/json.conf >/dev/null 2>&1 &
# run in the background, redirecting all output to /dev/null
2.4.4.1 Output after startup:
Sending Logstash's logs to /app/logstash-6.1.2/logs which is now configured via log4j2.properties
[2018-08-20T23:48:22,054][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/app/logstash-6.1.2/modules/fb_apache/configuration"}
[2018-08-20T23:48:22,069][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/app/logstash-6.1.2/modules/netflow/configuration"}
[2018-08-20T23:48:22,452][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-20T23:48:23,081][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.1.2"}
[2018-08-20T23:48:23,421][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-20T23:48:27,624][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.201:9200/]}}
[2018-08-20T23:48:27,639][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.1.201:9200/, :path=>"/"}
[2018-08-20T23:48:27,848][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.1.201:9200/"}
[2018-08-20T23:48:27,917][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-08-20T23:48:27,937][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-20T23:48:27,956][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true, "ignore_above"=>256}}}}}, {"float_fields"=>{"match"=>"*", "match_mapping_type"=>"float", "mapping"=>{"type"=>"float", "doc_values"=>true}}}, {"double_fields"=>{"match"=>"*", "match_mapping_type"=>"double", "mapping"=>{"type"=>"double", "doc_values"=>true}}}, {"byte_fields"=>{"match"=>"*", "match_mapping_type"=>"byte", "mapping"=>{"type"=>"byte", "doc_values"=>true}}}, {"short_fields"=>{"match"=>"*", "match_mapping_type"=>"short", "mapping"=>{"type"=>"short", "doc_values"=>true}}}, {"integer_fields"=>{"match"=>"*", "match_mapping_type"=>"integer", "mapping"=>{"type"=>"integer", "doc_values"=>true}}}, {"long_fields"=>{"match"=>"*", "match_mapping_type"=>"long", "mapping"=>{"type"=>"long", "doc_values"=>true}}}, {"date_fields"=>{"match"=>"*", "match_mapping_type"=>"date", "mapping"=>{"type"=>"date", "doc_values"=>true}}}, {"geo_point_fields"=>{"match"=>"*", "match_mapping_type"=>"geo_point", "mapping"=>{"type"=>"geo_point", "doc_values"=>true}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "doc_values"=>true}, "@version"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip", "doc_values"=>true}, "location"=>{"type"=>"geo_point", "doc_values"=>true}, "latitude"=>{"type"=>"float", "doc_values"=>true}, "longitude"=>{"type"=>"float", "doc_values"=>true}}}}}}}}
[2018-08-20T23:48:27,992][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-08-20T23:48:28,066][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.1.201:9200"]}
[2018-08-20T23:48:28,100][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x396a8a59 run>"}
[2018-08-20T23:48:28,364][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"main"}
[2018-08-20T23:48:28,479][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
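Once the pipeline is running and the access log receives traffic, the daily index should show up in ES (same _cat API as before):
curl 'http://192.168.1.201:9200/_cat/indices/tomcat-pc-*?v'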
2.5 Kibana Deployment
2.5.1 Unpack Kibana:
tar -zxvf kibana-4.5.4-linux-x86.tar.gz
cd kibana-4.5.4-linux-x86
2.5.1.1 Only the following settings need to change:
ELK:/app/elk/kibana # egrep -v "^$|^#" config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.88:9200"
kibana.index: ".kibana"
2.5.2 Start Kibana (as a non-root user):
cd kibana-4.5.4-linux-x86
./bin/kibana
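As with ES and Logstash, Kibana can be left running in the background:
nohup ./bin/kibana >/dev/null 2>&1 &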
2.5.3 Open host:port in a browser to reach the Kibana web interface
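Before any logs appear, create an index pattern matching your indices (in Kibana 4.x this lives under Settings → Indices), e.g. tomcat-pc-*.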
2.5.4 Selecting the log time range in the Kibana UI (screenshot omitted)
2.5.5 Log entries displayed by Kibana (screenshot omitted)
Note: for further operations, consult the documentation.