ELK Logging System Configuration and Deployment
Introduction
Recently the company asked me to deploy an ELK logging system and allocated six servers (8C 16G 500G). Based on my own experience I came up with the architecture below; the diagram is a bit rough, please bear with it.
Preparation:
- Elastic official site: www.elastic.co | Kafka official site: kafka.apache.org
- Downloads: elasticsearch, logstash, kibana, filebeat, kafka
- Server IPs: 192.168.1.1, 192.168.1.2, 192.168.1.3, 192.168.1.4, 192.168.1.5, 192.168.1.6
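Since the diagram itself is an image, here is a rough text sketch of the data flow, reconstructed from the configuration files below (Kibana is downloaded above but its configuration is not covered in this post):
application servers (Filebeat, e.g. 192.168.1.7)
  -> Kafka + ZooKeeper cluster (192.168.1.4-192.168.1.6:9092, topic log-app-dage)
  -> Logstash consumers (consumer group logstash-app-dage)
  -> Elasticsearch cluster (dedicated masters on 192.168.1.1-192.168.1.3, data nodes on 192.168.1.1-192.168.1.6)
  -> Kibana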
I will skip the extraction and installation steps and go straight to the configuration files.
Elasticsearch cluster deployment
Common Elasticsearch startup errors and fixes: https://www.cnblogs.com/zhi-leaf/p/8484337.html
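Most of the errors covered in that link boil down to kernel and file-descriptor limits. A minimal sketch of the host-level settings usually required before Elasticsearch will start (run as root on every Elasticsearch node; the values are the commonly recommended minimums, adjust to your environment):
sysctl -w vm.max_map_count=262144                   # mmap count required by Elasticsearch
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf  # make it persistent across reboots
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc  4096
* hard nproc  4096
EOF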
Master node configuration (3 nodes)
cluster.name: elk-cluster
node.name: elk-master1 # change to elk-master2 and elk-master3 on the other two nodes
node.master: true
node.data: false
path.data: /app/data/elasticsearch2/data
path.logs: /app/data/elasticsearch2/logs
path.repo: /app/data/elasticsearch2/backup
bootstrap.memory_lock: false
http.port: 10200
transport.tcp.port: 10300
network.host: 192.168.1.1 # change to 192.168.1.2 and 192.168.1.3 on the other two nodes
discovery.zen.ping.unicast.hosts: ["192.168.1.1:10300","192.168.1.2:10300","192.168.1.3:10300"]
cluster.initial_master_nodes: ["192.168.1.1:10300","192.168.1.2:10300","192.168.1.3:10300"]
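A quick, hedged way to confirm the three dedicated masters have formed a cluster, using the HTTP port 10200 configured above:
curl 'http://192.168.1.1:10200/_cat/nodes?v'   # lists all nodes that have joined, with their roles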
Data node configuration (6 nodes)
cluster.name: elk-cluster
node.name: elk-data1 # change to elk-data2 through elk-data6 on the other five nodes
node.master: false
node.data: true
path.data: /app/data/elasticsearch/data
path.logs: /app/data/elasticsearch/logs
path.repo: /app/data/elasticsearch/backup
bootstrap.memory_lock: false
http.port: 9200
network.host: 192.168.1.1 # change to 192.168.1.2 through 192.168.1.6 on the other five nodes
discovery.zen.ping.unicast.hosts: ["192.168.1.1:10300","192.168.1.2:10300","192.168.1.3:10300"]
cluster.initial_master_nodes: ["192.168.1.1:10300","192.168.1.2:10300","192.168.1.3:10300"]
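Once the data nodes are up, cluster health can be checked against any data node's HTTP port 9200; with this layout it should report 9 nodes in total (3 dedicated masters plus 6 data nodes):
curl 'http://192.168.1.1:9200/_cluster/health?pretty'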
Logstash deployment
Logstash startup command: /app/logstash-7.6.0/bin/logstash -f /app/logstash-7.6.0/config/smartproperty-manage-log.yml --config.reload.automatic --path.data /app/data/logstash/data_1
To run multiple Logstash instances on one host, give each instance its own --path.data directory, as in the example below.
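For example, a second pipeline on the same machine only needs a separate data directory (the pipeline file name here is hypothetical):
/app/logstash-7.6.0/bin/logstash -f /app/logstash-7.6.0/config/another-app-log.yml --config.reload.automatic --path.data /app/data/logstash/data_2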
input {
  kafka {
    bootstrap_servers => "192.168.1.4:9092,192.168.1.5:9092,192.168.1.6:9092"
    group_id => "logstash-app-dage" # Kafka consumer group id; both Logstash instances use the same group id for high availability
    auto_offset_reset => "earliest"
    consumer_threads => 3 # consumer threads; keep equal to the Kafka partition count, extra threads beyond that sit idle
    decorate_events => true
    topics => ["log-app-dage"] # the topic Filebeat publishes to
    codec => json
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time}" } # extract the timestamp from the application log line
  }
  date {
    match => ["log_time", "yyyy-MM-dd HH:mm:ss.SSS", "ISO8601"] # parse the extracted log_time
    target => "@timestamp" # replace @timestamp with the timestamp taken from the application log
  }
  mutate {
    add_field => { "appname" => "dage" } # add a field
    remove_field => ["agent", "ecs", "log_time"] # remove fields
  }
}
output {
  # route different log levels to different indices
  if [loglevel] == "info" {
    elasticsearch {
      hosts => ["192.168.1.1:9200"]
      index => "log-app-dage-info-%{+YYYY.MM.dd}"
    }
  }
  if [loglevel] == "error" {
    elasticsearch {
      hosts => ["192.168.1.1:9200"]
      index => "log-app-dage-error-%{+YYYY.MM.dd}"
    }
  }
  if [loglevel] == "debug" {
    elasticsearch {
      hosts => ["192.168.1.1:9200"]
      index => "log-app-dage-debug-%{+YYYY.MM.dd}"
    }
  }
  if [loglevel] == "warn" {
    elasticsearch {
      hosts => ["192.168.1.1:9200"]
      index => "log-app-dage-warn-%{+YYYY.MM.dd}"
    }
  }
}
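After Filebeat and Logstash are running, the per-level indices from the output section should start appearing; a quick check against Elasticsearch:
curl 'http://192.168.1.1:9200/_cat/indices/log-app-dage-*?v'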
Filebeat configuration file
filebeat.inputs:
- type: log
  enabled: true
  fields_under_root: true
  paths:
    - /app/app-dage/logs/app-error.log
  fields:
    hostIP: '192.168.1.7'
    loglevel: 'error'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  fields_under_root: true
  paths:
    - /app/app-dage/logs/app-info.log
  fields:
    hostIP: '192.168.1.7'
    loglevel: 'info'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  fields_under_root: true
  paths:
    - /app/app-dage/logs/app-debug.log
  fields:
    hostIP: '192.168.1.7'
    loglevel: 'debug'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  fields_under_root: true
  paths:
    - /app/app-dage/logs/app-warn.log
  fields:
    hostIP: '192.168.1.7'
    loglevel: 'warn'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
output.kafka:
  enabled: true
  hosts: ["192.168.1.4:9092", "192.168.1.5:9092", "192.168.1.6:9092"]
  topic: 'log-app-dage'
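Before putting Filebeat under supervisor, the configuration syntax and the Kafka output can be sanity-checked from the Filebeat install directory (assuming the file above is saved as filebeat.yml):
./filebeat test config -c filebeat.yml   # validate the configuration
./filebeat test output -c filebeat.yml   # check connectivity to the Kafka brokers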
Kafka and ZooKeeper configuration files
broker.id=1
advertised.listeners=PLAINTEXT://192.168.1.4:9092 # change to 192.168.1.5:9092 and 192.168.1.6:9092 on the other two brokers
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
zookeeper.connect=192.168.1.4:2181,192.168.1.5:2181,192.168.1.6:2181
log.dirs=/app/data/kafka/log/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.roll.hours=168
Common Kafka commands
- bin/kafka-topics.sh --bootstrap-server 192.168.1.4:9092 --list # list all topics
- bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.4:9092,192.168.1.5:9092,192.168.1.6:9092 --topic log-app-dage --from-beginning # consume messages from the given topic from the beginning
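The topic consumed above can also be created explicitly so that the partition count matches the three Logstash consumer threads; a hedged example (a replication factor of 2 is my assumption, choose what fits your durability needs):
- bin/kafka-topics.sh --bootstrap-server 192.168.1.4:9092 --create --topic log-app-dage --partitions 3 --replication-factor 2 # create the topic with 3 partitions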
All processes are managed with supervisor; the relevant configuration parameters are as follows.
[program:kafka]
command=/app/kafka_2.13-2.4.0/bin/kafka-server-start.sh /app/kafka_2.13-2.4.0/config/server.properties # startup command
numprocs=1
priority=1
autostart=true
startretries=3
autorestart=true
stopasgroup=true
killasgroup=true
redirect_stderr=true
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=10
stdout_logfile=/var/log/supervisor/supervisor_kafka.log # log file for this process
environment=JAVA_HOME=/app/jdk1.8.0_221 # JAVA_HOME
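After adding a program section like this one, the usual supervisorctl steps to load it and check its state are:
supervisorctl reread   # pick up new or changed program sections
supervisorctl update   # apply them and start newly added programs
supervisorctl status   # list managed processes and their states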
Below is a screenshot of the processes managed by supervisor.