Elasticsearch + Logstash with Java-based Log Retrieval
To keep project logs from being leaked, Kibana is not used for data display.
1. Environment Preparation
1.1 Create a Regular User
```
# Create the user
useradd queylog
# Set the password
passwd queylog
# Grant sudo privileges: locate the sudoers file
whereis sudoers
# Make the file writable
chmod -v u+w /etc/sudoers
# Edit the file (add a line such as: queylog ALL=(ALL) ALL)
vi /etc/sudoers
# Revoke write permission
chmod -v u-w /etc/sudoers
# The first use of sudo prints the usual lecture:
# We trust you have received the usual lecture from the local System Administrator.
# It usually boils down to these three things:
#   1) Respect the privacy of others.
#   2) Think before you type.
#   3) With great power comes great responsibility.
```
The user is now created.
1.2 安装jdk
su queylog cd /home/queylog #解压jdk-8u191-linux-x64.tar.gz tar -zxvf jdk-8u191-linux-x64.tar.gz sudo mv jdk1.8.0_191 /opt/jdk1.8 #编辑/ect/profile vi /ect/profile export java_home=/opt/jdk1.8 export jre_home=$java_home/jre export classpath=.:$java_home/lib:$jre_home/lib:$classpath export path=$java_home/bin:$jre_home/bin:$path #刷新配置文件 source /ect/profile #查看jdk版本 java -verion
1.3 防火墙设置
#放行指定ip firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="172.16.110.55" accept" #重新载入 firewall-cmd --reload
2、安装elasticsearch
2.1 elasticsearch配置
注意:elasticsearch要使用普通用户启动要不然会报错
su queylog cd /home/queylog #解压elasticsearch-6.5.4.tar.gz tar -zxvf elasticsearch-6.5.4.tar.gz sudo mv elasticsearch-6.5.4 /opt/elasticsearch #编辑es配置文件 vi /opt/elasticsearch/config/elasticsearch.yml # 配置es的集群名称 cluster.name: elastic # 修改服务地址 network.host: 192.168.8.224 # 修改服务端口 http.port: 9200 #切换root用户 su root #修改/etc/security/limits.conf 追加以下内容 vi /etc/security/limits.conf * hard nofile 655360 * soft nofile 131072 * hard nproc 4096 * soft nproc 2048 #编辑 /etc/sysctl.conf,追加以下内容: vi /etc/sysctl.conf vm.max_map_count=655360 fs.file-max=655360 #保存后,重新加载: sysctl -p #切换回普通用户 su queylog #启动elasticsearch ./opt/elasticsearch/bin/elasticsearch #测试 curl http://192.168.8.224:9200 #控制台会打印 { "name" : "l_da6oi", "cluster_name" : "elasticsearch", "cluster_uuid" : "es7yp6fvtvc8kmhlutoz6w", "version" : { "number" : "6.5.4", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "d2ef93d", "build_date" : "2018-12-17t21:17:40.758843z", "build_snapshot" : false, "lucene_version" : "7.5.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "you know, for search" }
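If you prefer to verify connectivity from Java instead of curl, here is a minimal sketch that issues the same HTTP request using only the JDK (the class name `EsPing` is hypothetical):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsPing {
    public static void main(String[] args) throws Exception {
        // Same check as `curl http://192.168.8.224:9200`
        URL url = new URL("http://192.168.8.224:9200");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints the cluster info JSON shown above
            }
        }
    }
}
```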
2.2 Manage Elasticsearch as a Service
```
# Switch to root
su root
# Write the service unit file
vi /usr/lib/systemd/system/elasticsearch.service

[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/opt/elasticsearch/config
EnvironmentFile=/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=queylog
Group=queylog
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# the Elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of processes
LimitNPROC=4096

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until the process is stopped
TimeoutStopSec=0

# The SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# The Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
```

```
vi /etc/sysconfig/elasticsearch

################################
# Elasticsearch                #
################################
# Elasticsearch home directory
ES_HOME=/opt/elasticsearch

# Elasticsearch Java path
JAVA_HOME=/opt/jdk1.8
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib

# Elasticsearch configuration directory
ES_PATH_CONF=/opt/elasticsearch/config

# Elasticsearch PID directory
PID_DIR=/opt/elasticsearch/config

################################
# Elasticsearch service        #
################################
# SysV init.d
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5

################################
# Elasticsearch properties     #
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMAs (virtual memory areas) a process can own
# When using systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144
```

```
# Reload service definitions
systemctl daemon-reload
# Switch to the regular user
su queylog
# Start Elasticsearch
sudo systemctl start elasticsearch
# Enable start on boot
sudo systemctl enable elasticsearch
```
3. Install Logstash
3.1 Logstash Configuration
```
su queylog
cd /home/queylog
# Unpack logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
sudo mv logstash-6.5.4 /opt/logstash
# Edit the Logstash configuration file
vi /opt/logstash/config/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["http://192.168.8.224:9200"]
# Create logstash.conf in the bin directory
vi /opt/logstash/bin/logstash.conf
input {
  # Read from a file
  file {
    # Log file path
    path => "/opt/tomcat/logs/catalina.out"
    start_position => "beginning" # (end, beginning)
    type => "isp"
  }
}
#filter {
  # Define the data format; parse logs with grok patterns
  # (filter and collect according to your actual needs)
  #grok {
  #  match => { "message" => "%{IPV4:clientip}|%{GREEDYDATA:request}|%{NUMBER:duration}" }
  #}
  # Convert data types as needed
  #mutate { convert => { "duration" => "integer" } }
#}
# Define the output
output {
  elasticsearch {
    hosts => "192.168.43.211:9200" # Elasticsearch default port
    index => "ind"
    document_type => "isp"
  }
}
# Grant ownership of the directory to the user
chown queylog:queylog /opt/logstash
# Start Logstash
cd /opt/logstash/bin && ./logstash -f logstash.conf
# After Logstash is up, check whether the ES index has been created
curl http://192.168.8.224:9200/_cat/indices
```
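The same sanity check can be done from Java. A minimal sketch using the ES 6.5 transport client (the same transport that the Spring Data code below uses on port 9300) verifies that the `ind` index exists; this assumes the `org.elasticsearch.client:transport:6.5.4` dependency is on the classpath, and the class name `IndexCheck` is hypothetical:

```java
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class IndexCheck {
    public static void main(String[] args) throws Exception {
        Settings settings = Settings.builder()
                .put("cluster.name", "elastic") // must match cluster.name in elasticsearch.yml
                .build();
        try (TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(
                        InetAddress.getByName("192.168.43.211"), 9300))) {
            boolean exists = client.admin().indices()
                    .prepareExists("ind").get().isExists();
            System.out.println("index 'ind' exists: " + exists);
        }
    }
}
```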
4. Java Code
This relates to an exception I previously resolved when integrating Elasticsearch with Redis in Spring Boot.
After consulting various sources, the following explanation of the cause seems the most reasonable.
Cause analysis: another part of the program also uses Netty, in this case Redis. This means Netty's available-processor count is initialized before the transport client is instantiated. When the transport client is instantiated, it too tries to set the number of processors; since Netty has already been initialized elsewhere and guards against doing this twice, the first instantiation fails with the IllegalStateException observed.
Solution
Add the following to the Spring Boot startup class:
```java
System.setProperty("es.set.netty.runtime.available.processors", "false");
```
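For context, here is a minimal sketch of where the property belongs. The class name `CbeiIspApplication` follows the test class shown later in this article (the exact casing is assumed); the property must be set before Spring instantiates the transport client:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CbeiIspApplication {
    public static void main(String[] args) {
        // Must run before the Elasticsearch transport client is created,
        // otherwise Netty's processor count has already been set by Redis.
        System.setProperty("es.set.netty.runtime.available.processors", "false");
        SpringApplication.run(CbeiIspApplication.class, args);
    }
}
```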
4.1 Add the POM Dependency
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
```
4.2 Update the Configuration File
```properties
spring.data.elasticsearch.cluster-name=elastic
# The REST API uses port 9200;
# Java programs (transport client) use port 9300
spring.data.elasticsearch.cluster-nodes=192.168.43.211:9300
```
4.3 Interface and Implementation Classes
```java
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;

@Document(indexName = "ind", type = "isp")
public class Bean {

    @Field
    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "Tomcat{" +
                ", message='" + message + '\'' +
                '}';
    }
}
```
```java
import java.util.Map;

public interface IElasticsearchService {

    Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) throws Exception;

    // Escape special characters in the query string
    default String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); ++i) {
            char c = s.charAt(i);
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')'
                    || c == ':' || c == '^' || c == '[' || c == ']' || c == '"'
                    || c == '{' || c == '}' || c == '~' || c == '*' || c == '?'
                    || c == '|' || c == '&' || c == '/') {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```
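To illustrate what `escape` does to a raw query, here is a small hypothetical throwaway snippet; it matters because characters like `:`, `(` and `?` are Lucene query-syntax operators and would otherwise change the meaning of the search:

```java
public class EscapeDemo {
    public static void main(String[] args) {
        // Stub instance just to reach the default method
        IElasticsearchService svc = (keywords, currentPage, pageSize) -> null;
        System.out.println(svc.escape("error: java.io.IOException (disk full?)"));
        // -> error\: java.io.IOException \(disk full\?\)
    }
}
```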
```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.annotation.Resource;

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.stereotype.Service;

/**
 * Elasticsearch service implementation
 */
@Service
public class ElasticsearchServiceImpl implements IElasticsearchService {

    Logger log = LoggerFactory.getLogger(ElasticsearchServiceImpl.class);

    @Autowired
    ElasticsearchTemplate elasticsearchTemplate;

    @Resource
    HighlightResultHelper highlightResultHelper;

    @Override
    public Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) {
        keywords = escape(keywords);
        currentPage = Math.max(currentPage - 1, 0);

        // Highlight the matched keywords in the results
        List<HighlightBuilder.Field> highlightFields = new ArrayList<>();
        HighlightBuilder.Field message = new HighlightBuilder.Field("message")
                .fragmentOffset(80000)
                .numOfFragments(0)
                .requireFieldMatch(false)
                .preTags("<span style='color:red'>")
                .postTags("</span>");
        highlightFields.add(message);
        HighlightBuilder.Field[] highlightFieldsAry =
                highlightFields.toArray(new HighlightBuilder.Field[highlightFields.size()]);

        // Build the query: if the keyword is non-empty, match it against the message field
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        queryBuilder.withPageable(PageRequest.of(currentPage, pageSize));
        if (!MyStringUtils.isEmpty(keywords)) {
            boolQueryBuilder.must(QueryBuilders.queryStringQuery(keywords).field("message"));
        }
        queryBuilder.withQuery(boolQueryBuilder);
        queryBuilder.withHighlightFields(highlightFieldsAry);
        log.info("Query: {}", queryBuilder.build().getQuery().toString());

        // Execute the query
        AggregatedPage<Bean> result = elasticsearchTemplate.queryForPage(
                queryBuilder.build(), Bean.class, highlightResultHelper);

        // Parse the results
        long total = result.getTotalElements();
        int totalPage = result.getTotalPages();
        List<Bean> blogList = result.getContent();
        Map<String, Object> map = new HashMap<>();
        map.put("total", total);
        map.put("totalPage", totalPage);
        map.put("pageSize", pageSize);
        map.put("currentPage", currentPage + 1);
        map.put("blogList", blogList);
        return map;
    }
}
```
```java
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;

import com.alibaba.fastjson.JSONObject;
import org.apache.commons.beanutils.PropertyUtils;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.core.SearchResultMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;

/**
 * Elasticsearch highlight result mapping
 */
@Component
public class HighlightResultHelper implements SearchResultMapper {

    Logger log = LoggerFactory.getLogger(HighlightResultHelper.class);

    @Override
    public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable) {
        List<T> results = new ArrayList<>();
        for (SearchHit hit : response.getHits()) {
            if (hit != null) {
                T result = null;
                if (StringUtils.hasText(hit.getSourceAsString())) {
                    result = JSONObject.parseObject(hit.getSourceAsString(), clazz);
                }
                // Copy the highlighted fragments onto the mapped object
                for (HighlightField field : hit.getHighlightFields().values()) {
                    try {
                        PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
                    } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
                        log.error("Failed to set highlighted field: {}", e.getMessage(), e);
                    }
                }
                results.add(result);
            }
        }
        return new AggregatedPageImpl<T>(results, pageable, response.getHits().getTotalHits(),
                response.getAggregations(), response.getScrollId());
    }

    public <T> T mapSearchHit(SearchHit searchHit, Class<T> clazz) {
        T result = null;
        if (StringUtils.hasText(searchHit.getSourceAsString())) {
            result = JSONObject.parseObject(searchHit.getSourceAsString(), clazz);
        }
        for (HighlightField field : searchHit.getHighlightFields().values()) {
            try {
                PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
            } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
                log.error("Failed to set highlighted field: {}", e.getMessage(), e);
            }
        }
        return result;
    }

    private String concat(Text[] texts) {
        StringBuilder sb = new StringBuilder();
        for (Text text : texts) {
            sb.append(text.toString());
        }
        return sb.toString();
    }
}
```
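Since Kibana is deliberately not used, the search results need to reach a UI somehow. A minimal controller sketch exposing the service over HTTP could look like the following (hypothetical; the class name `LogSearchController` and the route are not from the original article):

```java
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogSearchController {

    @Autowired
    private IElasticsearchService elasticsearchService;

    // e.g. GET /logs/search?keywords=exception&page=1&size=10
    @GetMapping("/logs/search")
    public Map<String, Object> search(@RequestParam String keywords,
                                      @RequestParam(defaultValue = "1") Integer page,
                                      @RequestParam(defaultValue = "10") Integer size) throws Exception {
        return elasticsearchService.search(keywords, page, size);
    }
}
```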
```java
import java.util.Map;

import com.alibaba.fastjson.JSON;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = CbeiIspApplication.class)
public class ElasticsearchServiceTest {

    private static Logger logger = LoggerFactory.getLogger(ElasticsearchServiceTest.class);

    @Autowired
    private IElasticsearchService elasticsearchService;

    @Test
    public void getLog() {
        try {
            Map<String, Object> search = elasticsearchService.search("exception", 1, 10);
            logger.info(JSON.toJSONString(search));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
That's all for today. This article has only briefly introduced how to use Elasticsearch and Logstash together; if anything here is inaccurate, feel free to point it out in the comments.