Logstash

1. Introduction to Logstash

Logstash is a real-time data collection engine. It ingests data of many types, then analyzes, filters, and normalizes it according to the rules you define, before shipping the standardized, formatted result to a visualization layer. It supports full or incremental transfer from a wide range of data sources and is most commonly used for log processing. A pipeline runs in three stages (a minimal skeleton follows the list):

(1) input: the ingestion stage; accepts many data sources such as Oracle, MySQL, PostgreSQL, files, and more;
(2) filter: the normalization stage; filters and formats data, e.g. parsing timestamps and strings;
(3) output: the output stage; writes to destinations such as Elasticsearch, MongoDB, Kafka, and other consumers.
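
To make the three stages concrete, here is a minimal pipeline skeleton (illustrative only; each stage accepts many different plugins):

input  { stdin {} }                                        # (1) read events from the console
filter { mutate { add_field => { "stage" => "demo" } } }   # (2) transform each event
output { stdout { codec => rubydebug } }                   # (3) print each event in debug form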

2. Download

https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.tar.gz
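
Assuming a Linux host with network access, the archive can be fetched and unpacked like this (the target directory is an arbitrary choice):

# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.tar.gz
# tar -zxvf logstash-6.3.1.tar.gz -C /opt
# cd /opt/logstash-6.3.1

(The examples below were captured against 6.4.3; they work the same way on either 6.x release.)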

3. Hands-on Examples

Example 1: Console to Console

# from the logstash-6.4.3 install directory:
bin/logstash -e 'input { stdin { } } output { stdout {} }'

Example 2: Console to Elasticsearch

# cat test_es.conf

input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["hostname:9200"]
    index => "testeslogstash"
  }
  stdout { codec => rubydebug }
}
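
With the file saved, a sketch of running it and checking the result ("hostname" is a placeholder for your ES node, as in the config):

# bin/logstash -f test_es.conf
(type a few test lines, then query ES to confirm the documents arrived)
# curl 'http://hostname:9200/testeslogstash/_search?pretty'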

Example 3: JSON file to Elasticsearch

# cat data.json
{"age":16,"name":"tom"}
{"age":11,"name":"tsd"}

# cat json.conf
input {
  file {
    path => "/mnt/data.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}
output {
  elasticsearch {
    hosts => "http://hostname:9200"
    index => "jsontestlogstash"
    document_type => "doc"
  }
  stdout {}
}

Removing the bookkeeping fields Logstash adds automatically

# cat json.conf
input {
  file {
    path => "/mnt/data.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}
filter {
  mutate {
    remove_field => ["@timestamp", "@version", "host", "path"]
  }
}
output {
  elasticsearch {
    hosts => "http://hostname:9200"
    index => "jsontestlogstash"
    document_type => "doc"
  }
  stdout {}
}

Example 4: Time zone conversion, type casting, and field removal

# Input
input {
  file {
    path => ["path/to/file"]
    # custom type tag
    type => "custom"
    start_position => "beginning"
  }
}

# Filters
filter {
  # strip carriage returns
  mutate {
    gsub => [ "message", "\r", "" ]
  }

  # split on commas
  mutate {
    split => ["message", ","]
  }

  # after the split, name and assign the fields
  mutate {
    add_field => {
      "id" => "%{[message][0]}"
      "user" => "%{[message][2]}"
      "pc" => "%{[message][3]}"
      "cc" => "%{[message][5]}"
      "bcc" => "%{[message][6]}"
      "from_user" => "%{[message][7]}"
      "size" => "%{[message][8]}"
      "attachments" => "%{[message][9]}"
      "content" => "%{[message][10]}"
    }
  }

  # parse the date inside a field and convert its time zone, producing "date"
  date {
    match => [ "mydate", "MM/dd/yyyy HH:mm:ss" ]
    target => "date"
    locale => "en"
    timezone => "+00:00"
  }

  # remove fields that are no longer needed
  mutate {
    remove_field => ["message", "mydate", "@version", "host", "path"]
  }

  # cast two fields to integers
  mutate {
    convert => {
      "size" => "integer"
      "attachments" => "integer"
    }
  }
}

# Output; the target is ES
output {
  # stdout { codec => rubydebug }
  elasticsearch {
    # target hosts
    hosts => ["host1:9200", "host2:9200"]
    # index name
    index => "custom_index"
    # document type
    document_type => "custom"
  }
}
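
For intuition, suppose one input line looks like this (a hypothetical record laid out to match the column positions used above):

1001,01/15/2019 08:30:00,alice,PC-42,corp,bob,carol,alice@example.com,2048,2,hello world

After the gsub and split filters, message is an array, so %{[message][0]} resolves to 1001 (id), %{[message][8]} to 2048 (size), %{[message][10]} to hello world (content), and so on; the convert filter then turns size and attachments into integers so ES indexes them numerically. (The date filter expects a mydate field; in the original pipeline it would be populated from one of the columns, e.g. column 1.)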

Example 5: Importing MySQL data into Elasticsearch with Logstash

I. Importing MySQL data into Elasticsearch

1. Prepare the data in MySQL:

mysql> show tables;
+----------------+
| Tables_in_yang |
+----------------+
| im             |
+----------------+
1 row in set (0.00 sec)

mysql> select * from im;
+----+------+
| id | name |
+----+------+
|  2 | MSN  |
|  3 | QQ   |
+----+------+
2 rows in set (0.00 sec)

2. Prepare a simple example config file:

# cat mysqles.conf
input {
  stdin {}
  jdbc {
    type => "jdbc"
    jdbc_connection_string => "jdbc:mysql://mysql:3306/logToes?characterEncoding=UTF-8&autoReconnect=true"
    # database credentials
    jdbc_user => "root"
    jdbc_password => "000000"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/opt/software/mysql-connector-java.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM im"
  }
}
output {
  elasticsearch {
    # ES cluster address
    hosts => ["itstar111:9200"]
    # index name; must be lowercase
    index => "im"
  }
  stdout {
  }
}
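
A sketch of running this one-shot sync and verifying it (host name as in the config):

# bin/logstash -f mysqles.conf
# curl 'http://itstar111:9200/im/_search?pretty'

Both rows of the im table should come back as documents.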

Single-table sync

input {
  stdin {}
  jdbc {
    type => "jdbc"
    # database connection string
    jdbc_connection_string => "jdbc:mysql://mysql:3306/logToes?characterEncoding=UTF-8&autoReconnect=true"
    # database credentials
    jdbc_user => "root"
    jdbc_password => "000000"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/opt/software/mysql-connector-java.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # number of connection retry attempts
    connection_retry_attempts => "3"
    # validate the connection before use (default false)
    jdbc_validate_connection => "true"
    # connection validation timeout, default 3600s
    jdbc_validation_timeout => "3600"
    # enable paged queries (default false)
    jdbc_paging_enabled => "true"
    # rows per page (default 100000; lower it if rows are wide or update frequently)
    jdbc_page_size => "500"
    # statement is the query SQL; for complex SQL, point statement_filepath at a .sql file instead;
    # sql_last_value is a built-in variable holding the tracking_column value of the last row
    # from the previous run (here, ModifyTime)
    # statement_filepath => "mysql/jdbc.sql"
    statement => "SELECT KeyId,TradeTime,OrderUserName,ModifyTime FROM DetailTab WHERE ModifyTime >= :sql_last_value order by ModifyTime asc"
    # lowercase field names? default true (set false if the data is serialized/deserialized downstream)
    lowercase_column_names => false
    # Value can be any of: fatal, error, warn, info, debug; default info
    sql_log_level => warn
    #
    # record the last run? true saves the tracking_column value of the last result
    # to the file named by last_run_metadata_path
    record_last_run => true
    # track a column value rather than the run timestamp (otherwise tracking_column defaults to the timestamp)
    use_column_value => true
    # the column to track for incremental sync; must be a real database column
    tracking_column => "ModifyTime"
    # Value can be any of: numeric, timestamp; default is "numeric"
    tracking_column_type => timestamp
    # where record_last_run stores its state
    last_run_metadata_path => "mysql/last_id.txt"
    # clear the last_run_metadata_path record? must be false for incremental sync
    clean_run => false
    #
    # sync schedule (minute hour day month weekday); default is every minute
    schedule => "* * * * *"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
  # convert: cast the TotalMoney field to float
  mutate {
    convert => {
      "TotalMoney" => "float"
    }
  }
}
output {
  elasticsearch {
    # ES cluster addresses
    hosts => ["192.168.1.1:9200", "192.168.1.2:9200", "192.168.1.3:9200"]
    # index name; must be lowercase
    index => "consumption"
  }
  stdout {
    codec => json_lines
  }
}
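
Each scheduled run substitutes the saved tracking value into :sql_last_value, so the SQL actually sent to MySQL looks roughly like this (the timestamp is illustrative):

SELECT KeyId,TradeTime,OrderUserName,ModifyTime FROM DetailTab WHERE ModifyTime >= '2019-01-01 00:00:00' order by ModifyTime asc

Between runs that value lives in mysql/last_id.txt, which is why clean_run must stay false: clearing it would make the next run start over from the beginning.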

Multi-table sync

input {
  stdin {}
  jdbc {
    # when syncing multiple tables, distinguish them by type; "dbname_tablename" is a good
    # convention, and each jdbc block needs its own type
    type => "TestDB_DetailTab"
    # remaining options omitted here; see the single-table config
    # …
    # …
    # where record_last_run stores its state
    last_run_metadata_path => "mysql\last_id.txt"
    # clear the last_run_metadata_path record? must be false for incremental sync
    clean_run => false
    #
    # sync schedule (minute hour day month weekday); default is every minute
    schedule => "* * * * *"
  }
  jdbc {
    # when syncing multiple tables, distinguish them by type; each jdbc block needs its own type
    type => "TestDB_Tab2"
    # each table's last_run_metadata_path must point to a different file to avoid interference
    # remaining options omitted here
    # …
    # …
  }
}

filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}

output {
  # the type checked here must match the type set in the corresponding jdbc block
  if [type] == "TestDB_DetailTab" {
    elasticsearch {
      # host => "192.168.1.1"
      # port => "9200"
      # ES cluster addresses
      hosts => ["192.168.1.1:9200", "192.168.1.2:9200", "192.168.1.3:9200"]
      # index name; must be lowercase
      index => "detailtab1"
      # unique document id (the table's key column is a good choice)
      document_id => "%{KeyId}"
    }
  }
  if [type] == "TestDB_Tab2" {
    elasticsearch {
      # host => "192.168.1.1"
      # port => "9200"
      # ES cluster addresses
      hosts => ["192.168.1.1:9200", "192.168.1.2:9200", "192.168.1.3:9200"]
      # index name; must be lowercase
      index => "detailtab2"
      # unique document id (the table's key column is a good choice)
      document_id => "%{KeyId}"
    }
  }
  stdout {
    codec => json_lines
  }
}

Full sync

Import all MySQL data into Elasticsearch on a schedule (here, once per minute).

input {
  stdin {}
  jdbc {
    type => "jdbc"
    jdbc_connection_string => "jdbc:mysql://192.168.200.100:3306/yang?characterEncoding=UTF-8&autoReconnect=true"
    # database credentials
    jdbc_user => "root"
    jdbc_password => "010209"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/mnt/mysql-connector-java-5.1.38.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM im"
    schedule => "* * * * *"
  }
}
output {
  elasticsearch {
    # ES cluster address
    hosts => ["192.168.200.100:9200"]
    # index name; must be lowercase
    index => "im"
  }
  stdout {
  }
}

Incremental sync

Use Logstash to incrementally sync MySQL data into Elasticsearch (once per minute).

input {
  stdin {}
  jdbc {
    type => "jdbc"
    jdbc_connection_string => "jdbc:mysql://192.168.200.100:3306/yang?characterEncoding=UTF-8&autoReconnect=true"
    # database credentials
    jdbc_user => "root"
    jdbc_password => "010209"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/mnt/mysql-connector-java-5.1.38.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # enable paged queries
    jdbc_paging_enabled => "true"
    # rows per page
    jdbc_page_size => "50000"
    # path + name of a .sql file to execute
    # statement_filepath => "/data/my_sql2.sql"
    # the SQL statement; statement_filepath can be used instead to point at a file
    statement => "SELECT * FROM im where id > :sql_last_value"
    # sync once every minute
    schedule => "* * * * *"
    # lowercase field names? default true (set false if the data is serialized/deserialized downstream)
    lowercase_column_names => false
    # record the last run? true saves the tracking_column value of the last result
    # to the file named by last_run_metadata_path
    record_last_run => true
    # track a column value rather than the run timestamp (otherwise tracking_column defaults to the timestamp)
    use_column_value => true
    # the column to track for incremental sync; must be a real database column
    tracking_column => "id"
    # where record_last_run stores its state
    last_run_metadata_path => "/mnt/sql_last_value"
    # clear the last_run_metadata_path record? must be false for incremental sync
    clean_run => false
  }
}
output {
  elasticsearch {
    # ES cluster address
    hosts => ["192.168.200.100:9200"]
    # index name; must be lowercase
    index => "im"
  }
  stdout {
  }
}

Note the tracking-related settings (record_last_run, use_column_value, tracking_column, last_run_metadata_path, clean_run): together they implement the incremental sync. After each run, the tracking_column value of the last row synced (here, id) is recorded in the file named by last_run_metadata_path.

On the next run, that recorded value is substituted for :sql_last_value in the statement, so only newer rows are fetched.
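
For reference, the state file is plain YAML. After the rows above have been synced, /mnt/sql_last_value would hold something like this (the exact value depends on your data):

--- 3

so the next scheduled query effectively becomes SELECT * FROM im where id > 3, fetching only rows inserted since the last run.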

Example 6: Extracting MySQL data into Kafka with Logstash

I. Verifying the Logstash-to-Kafka connection

Notes:

Kafka here runs in pseudo-distributed mode, which ships with a bundled ZooKeeper.

1. Start ZooKeeper first (in pseudo-distributed mode, use the bundled one):

# nohup /mnt/kafka/bin/zookeeper-server-start.sh /mnt/kafka/config/zookeeper.properties &

2. Start the broker

# nohup /mnt/kafka/bin/kafka-server-start.sh /mnt/kafka/config/server.properties &

3. Create a topic

# ./kafka-topics.sh --create --zookeeper 192.168.200.100:2181 --topic test --partitions 1 --replication-factor 1
Created topic "test".

4. Start a console consumer

# ./kafka-console-consumer.sh --topic test --zookeeper localhost:2181

5. Write the Logstash config that feeds Kafka

input {
  stdin {}
}
output {
  kafka {
    topic_id => "test"
    # Kafka broker address
    bootstrap_servers => "192.168.200.100:9092"
    # batch_size => 5
  }
  stdout {
    codec => rubydebug
  }
}
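
To test end to end, save the config (the file name below is arbitrary), start the pipeline, and type a few lines; each one should appear in the console consumer started in step 4:

# bin/logstash -f kafka_test.conf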

II. Extracting MySQL data into Kafka

input {
  stdin {}
  jdbc {
    type => "jdbc"
    jdbc_connection_string => "jdbc:mysql://192.168.200.100:3306/yang?characterEncoding=UTF-8&autoReconnect=true"
    # database credentials
    jdbc_user => "root"
    jdbc_password => "010209"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/mnt/mysql-connector-java-5.1.38.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM im"
  }
}
output {
  kafka {
    topic_id => "test"
    # Kafka broker address
    bootstrap_servers => "192.168.200.100:9092"
    batch_size => 5
  }
  stdout {
  }
}

Example 7: Nginx -> Logstash -> Elasticsearch -> Kibana

Install the build prerequisites

yum -y install gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel

Nginx download page: https://nginx.org/en/download.html

Upload the downloaded source archive: nginx-1.12.2.tar.gz

# unpack
tar -zxvf nginx-1.12.2.tar.gz

# enter the source directory, then build and install
./configure
make
make install

# the compiled files land under
cd /usr/local/nginx/sbin/

Common commands

./nginx            # start
./nginx -s stop    # stop immediately
./nginx -s quit    # stop gracefully (finish in-flight requests)
./nginx -s reload  # reload the configuration

Add a JSON log format

http {
    include mime.types;
    default_type application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    log_format json '{ "@timestamp": "$time_iso8601", '
         '"remote_addr": "$remote_addr", '
         '"remote_user": "$remote_user", '
         '"body_bytes_sent": "$body_bytes_sent", '
         '"request_time": "$request_time", '
         '"status": "$status", '
         '"request_uri": "$request_uri", '
         '"request_method": "$request_method", '
         '"http_referrer": "$http_referer", '
         '"http_x_forwarded_for": "$http_x_forwarded_for", '
         '"http_user_agent": "$http_user_agent"}';

    # log file location
    access_log  /usr/local/nginx/logs/access.log  json;
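
To confirm the JSON format is active, reload Nginx, generate one request, and inspect the last log line (a quick sketch; paths follow the build above):

# /usr/local/nginx/sbin/nginx -s reload
# curl http://localhost/
# tail -n 1 /usr/local/nginx/logs/access.log

The tail output should be a single JSON object matching the log_format json definition.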

Web UI: nginx serves on port 80.

nginxToes.conf

input {
  file {
    path => "/usr/local/nginx/logs/access.log"
    type => "nginx-access-log"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  elasticsearch {
    hosts => ["elk161:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}.log"
  }
}
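
A sketch of running the last leg (host name as in the config): start Logstash with this file, check that daily indices appear in ES, then point Kibana at them.

# bin/logstash -f nginxToes.conf
# curl 'http://elk161:9200/_cat/indices/nginx-access-log-*?v'

In Kibana, create an index pattern such as nginx-access-log-* to browse and chart the access log.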
