Logstash

ELK is arguably one of the best sets of tools currently available for aggregating, analysing, reporting on, and searching the logs of a distributed server cluster. Spring Boot, as a framework built for microservices, cannot avoid the problem of distributed logging either, so handling its logs through an ELK system is well worth doing. In this stack, E stands for Elasticsearch, which stores the logs; L for Logstash, which collects the logs and writes them into Elasticsearch; and K for Kibana, which visualises, analyses, and searches the log data held in Elasticsearch. Integrating Spring Boot with ELK therefore largely comes down to integrating Spring Boot with Logstash. So how do we wire Spring Boot and Logstash together?

Spring Boot uses logback for logging by default. Like other logging frameworks such as log4j, logback can send log data to a remote destination over a socket, given an IP address and a port. In Spring Boot, logback is usually configured through a logback-spring.xml file.
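
For reference, the application code only needs ordinary SLF4J logging; nothing Logstash-specific appears in it. A minimal sketch (the class name and log message are illustrative, not taken from this article):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    private static final Logger log = LoggerFactory.getLogger(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
        // With the logback-spring.xml shown below, this event goes to both the
        // console appender and the Logstash TCP appender.
        log.info("System parameters loaded");
    }
}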

To integrate logback with Logstash you need the logstash-logback-encoder package. Its Maven dependency is:

<dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>5.3</version>
</dependency>

With the dependency in place, logback can be configured. The logback-spring.xml looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<!-- Logback configuration: console output plus a TCP appender that ships logs to Logstash -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <!-- Location of the log file within the project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>

    <!-- Console log output pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Console appender -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Log output encoding -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender that ships logs to Logstash -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.1.111:8081</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <!-- Root log level -->
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>

I won't go into basic logback configuration here; there is plenty of material about it online. We only care about the Logstash-related part, namely the appender named logstash in the XML above. Its destination is the address the logs are sent to; when configuring Logstash you simply listen on this address to collect them. Here I deployed Logstash on a local virtual machine (whose address is 192.168.1.111), listening on port 8081; you will of course need to change the address to your own. The encoder element is mandatory. The console appender is kept so that Spring Boot's original console logging is not overridden. That completes the logback configuration.
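
Logstash itself is set up in the next step; once it is running, a quick way to confirm from the application host that the destination configured above is reachable is a plain TCP connect. A minimal sketch (the class name is illustrative; the IP and port are the ones used above):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class LogstashPortCheck {
    public static void main(String[] args) {
        // Address and port match the <destination> element above; change them to your own.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.1.111", 8081), 3000);
            System.out.println("Logstash TCP input is reachable");
        } catch (IOException e) {
            System.out.println("Cannot reach Logstash: " + e.getMessage());
        }
    }
}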

On the Logstash side, create a new folder named conf under the installation directory, and inside it create a file logstash.conf with the following content:

# Logstash configuration
# TCP -> Logstash -> Elasticsearch pipeline.

input {
  tcp {
    mode => "server"
    host => "192.168.1.111"   # use an IP address where possible
    port => 8081              # read logs from port 8081
    codec => json_lines       # requires the logstash-codec-json_lines plugin
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.1.111:9200"]  # output to Elasticsearch
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout {                              # remove this block if you do not need console output
    codec => rubydebug
  }
}

If your Logstash installation does not have the logstash-codec-json_lines plugin, install it with the following commands:

[root@localhost ~]# cd /usr/share/logstash/
[root@localhost logstash]# ls
bin  CONTRIBUTORS  data  Gemfile  Gemfile.lock  lib  LICENSE.txt  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack
[root@localhost logstash]# cd bin
[root@localhost bin]# ./logstash-plugin install logstash-codec-json_lines
Validating logstash-codec-json_lines
Installing logstash-codec-json_lines
Installation successful
[root@localhost bin]# 

Start Logstash, exposing port 8081 to accept logs:

[root@localhost logstash]# logstash -f logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-03-06 14:38:50.990 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-03-06 14:38:51.007 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.5.4"}
[INFO ] 2019-03-06 14:38:54.639 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-03-06 14:38:55.095 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.111:9200/]}}
[WARN ] 2019-03-06 14:38:55.284 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://192.168.1.111:9200/"}
[INFO ] 2019-03-06 14:38:55.549 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-03-06 14:38:55.553 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-03-06 14:38:55.577 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.1.111:9200"]}
[INFO ] 2019-03-06 14:38:55.591 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-03-06 14:38:55.608 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-03-06 14:38:55.644 [[main]>worker7] tcp - Starting tcp input listener {:address=>"localhost:8081", :ssl_enable=>"false"}
[INFO ] 2019-03-06 14:38:55.917 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x67d08165 run>"}
[INFO ] 2019-03-06 14:38:55.952 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-03-06 14:38:56.152 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
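
With Logstash listening, you can optionally verify the pipeline end to end before starting the Spring Boot application by pushing a hand-written JSON line into the TCP input. A minimal sketch (the class name and payload are made up; the json_lines codec treats each newline-terminated line as one event):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SendTestEvent {
    public static void main(String[] args) throws Exception {
        // One newline-terminated JSON document for the tcp input configured above.
        String event = "{\"message\":\"manual test event\",\"level\":\"INFO\"}\n";
        try (Socket socket = new Socket("192.168.1.111", 8081);
             OutputStream out = socket.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
        }
    }
}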

And that's it! The end result looks like this in the Logstash console:

{
    "logger_name" => "com.amt.hibei.sysframework.config.SysConfig",
    "thread_name" => "main",
     "@timestamp" => 2019-03-06T07:27:26.348Z,
    "level_value" => 20000,
           "host" => "182.148.112.187",
           "port" => 58138,
          "level" => "INFO",
       "@version" => "1",
        "message" => "============系统参数加载完成!=============="
}
{
    "logger_name" => "com.amt.hibei.client.HibeiGameClientHiApplication",
    "thread_name" => "main",
     "@timestamp" => 2019-03-06T07:27:26.259Z,
    "level_value" => 20000,
           "host" => "182.148.112.187",
           "port" => 58138,
          "level" => "INFO",
       "@version" => "1",
        "message" => "Starting HibeiGameClientHiApplication on Amt-PC with PID 4256 (E:\\IdeWorkspace\\hibeigame\\hibeigame-client-HI\\target\\classes started by amt in E:\\IdeWorkspace\\hibeigame)"
}
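
If you also want to confirm that the events were indexed, you can query Elasticsearch directly for the logstash-* index written by the output section of logstash.conf. A minimal sketch (the class name is illustrative):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IndexCheck {
    public static void main(String[] args) throws Exception {
        // Fetch one recent document from the daily logstash-* index.
        URL url = new URL("http://192.168.1.111:9200/logstash-*/_search?size=1&pretty");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}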

For reference, here is an alternative configuration, log-bak.xml, which uses LoggingEventCompositeJsonEncoder so that each log event is sent to Logstash as a set of structured JSON fields, including the Spring Cloud Sleuth trace and span IDs:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Logback configuration: console output plus a JSON appender that ships logs to Logstash -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName"
                    source="spring.application.name"/>

    <!-- Location of the log file within the project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>

    <!-- Console log output pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Console appender -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Log output encoding -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender that ships JSON-formatted logs to Logstash -->
    <appender name="logstash"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.11.86:9250</destination>
        <!-- Log output encoding -->
        <encoder
                class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <!-- Root log level -->
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>
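
The %X{X-B3-TraceId:-} and related conversions in this pattern read values from the SLF4J MDC; with Spring Cloud Sleuth on the classpath those keys are populated automatically for each request. A minimal sketch of the mechanism, with a value set by hand purely for illustration (class name and value are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {
    private static final Logger log = LoggerFactory.getLogger(MdcExample.class);

    public static void main(String[] args) {
        // Normally Spring Cloud Sleuth puts X-B3-TraceId / X-B3-SpanId into the MDC;
        // here a value is set manually only to show where the pattern reads it from.
        MDC.put("X-B3-TraceId", "demo-trace-id");
        try {
            log.info("This message carries the MDC value into the JSON encoder");
        } finally {
            MDC.remove("X-B3-TraceId");
        }
    }
}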

 
