
Sample Code for Integrating Kafka + Storm with Spring Boot


Preface

Business requirements called for integrating Storm and Kafka into a Spring Boot project: other services write their logs to a Kafka topic, and Storm consumes that topic in real time for data monitoring and other statistics. Tutorials on this are scarce online, so this post walks through how to integrate Storm + Kafka into Spring Boot, along with the pitfalls I ran into.

Tools and Environment

1. Java: JDK 1.8

2. IDE: IntelliJ IDEA 2017

3. Build and dependency management: Maven

4. Spring Boot 1.5.8.RELEASE

Requirements

1. Why integrate into Spring Boot

To manage the various microservices uniformly through Spring Boot and avoid scattering the same configuration across several places.

2. Approach and rationale

Spring Boot manages the beans required by Kafka, Storm, Redis, and the rest in one place: the other services collect their logs into Kafka, Kafka streams them to Storm in real time, and the actual processing happens in the Storm bolts.
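To illustrate the producer side of this pipeline, here is a minimal sketch of how another service could push a log line onto the Kafka topic via spring-kafka's KafkaTemplate. The LogProducer class and its use of the kafka.default-topic property are illustrative assumptions, not code from the original project:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical producer: services call send() and the record lands on the
// topic that the Storm topology subscribes to.
@Service
public class LogProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final String topic;

    public LogProducer(KafkaTemplate<String, String> kafkaTemplate,
                       @Value("${kafka.default-topic}") String topic) {
        this.kafkaTemplate = kafkaTemplate;
        this.topic = topic;
    }

    public void send(String logLine) {
        // Fire-and-forget; spring-kafka buffers and sends asynchronously.
        kafkaTemplate.send(topic, logLine);
    }
}
```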

Problems Encountered

1. Spring Boot has no official Storm integration.

2. With a Spring Boot style startup, it was unclear how to trigger the topology submission.

3. Submitting the topology hit a "could not find leader nimbus from seed hosts [localhost]" problem.

4. Inside a Storm bolt, bean instances could not be obtained via annotations for the processing logic.

Solution

Before integrating, we need to understand how Spring Boot starts and how it is configured (this article assumes you already know and have used Storm, Kafka, and Spring Boot).

Examples of integrating Storm with Spring Boot are rare online, but the requirement stands, so integrate we must. First, import the required jars:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    <exclusions>
        <exclusion>
            <artifactId>zookeeper</artifactId>
            <groupId>org.apache.zookeeper</groupId>
        </exclusion>
        <exclusion>
            <artifactId>spring-boot-actuator</artifactId>
            <groupId>org.springframework.boot</groupId>
        </exclusion>
        <exclusion>
            <artifactId>kafka-clients</artifactId>
            <groupId>org.apache.kafka</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <exclusions>
        <exclusion>
            <artifactId>kafka-clients</artifactId>
            <groupId>org.apache.kafka</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-hadoop</artifactId>
    <version>2.5.0.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <artifactId>commons-logging</artifactId>
            <groupId>commons-logging</groupId>
        </exclusion>
        <exclusion>
            <artifactId>netty</artifactId>
            <groupId>io.netty</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-core-asl</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>curator-client</artifactId>
            <groupId>org.apache.curator</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jettison</artifactId>
            <groupId>org.codehaus.jettison</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-mapper-asl</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-jaxrs</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>snappy-java</artifactId>
            <groupId>org.xerial.snappy</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-xc</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
        <exclusion>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <exclusion>
            <artifactId>zookeeper</artifactId>
            <groupId>org.apache.zookeeper</groupId>
        </exclusion>
        <exclusion>
            <artifactId>servlet-api</artifactId>
            <groupId>javax.servlet</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.10</version>
    <exclusions>
        <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
    <exclusions>
        <exclusion>
            <artifactId>log4j</artifactId>
            <groupId>log4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>zookeeper</artifactId>
            <groupId>org.apache.zookeeper</groupId>
        </exclusion>
        <exclusion>
            <artifactId>netty</artifactId>
            <groupId>io.netty</groupId>
        </exclusion>
        <exclusion>
            <artifactId>hadoop-common</artifactId>
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
        <exclusion>
            <artifactId>hadoop-annotations</artifactId>
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <exclusion>
            <artifactId>hadoop-yarn-common</artifactId>
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.3</version>
    <exclusions>
        <exclusion>
            <artifactId>commons-logging</artifactId>
            <groupId>commons-logging</groupId>
        </exclusion>
        <exclusion>
            <artifactId>curator-client</artifactId>
            <groupId>org.apache.curator</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-mapper-asl</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jackson-core-asl</artifactId>
            <groupId>org.codehaus.jackson</groupId>
        </exclusion>
        <exclusion>
            <artifactId>log4j</artifactId>
            <groupId>log4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>snappy-java</artifactId>
            <groupId>org.xerial.snappy</groupId>
        </exclusion>
        <exclusion>
            <artifactId>zookeeper</artifactId>
            <groupId>org.apache.zookeeper</groupId>
        </exclusion>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
        <exclusion>
            <artifactId>hadoop-auth</artifactId>
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <exclusion>
            <artifactId>commons-lang</artifactId>
            <groupId>commons-lang</groupId>
        </exclusion>
        <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>servlet-api</artifactId>
            <groupId>javax.servlet</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-examples</artifactId>
    <version>2.7.3</version>
    <exclusions>
        <exclusion>
            <artifactId>commons-logging</artifactId>
            <groupId>commons-logging</groupId>
        </exclusion>
        <exclusion>
            <artifactId>netty</artifactId>
            <groupId>io.netty</groupId>
        </exclusion>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
        <exclusion>
            <artifactId>log4j</artifactId>
            <groupId>log4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>servlet-api</artifactId>
            <groupId>javax.servlet</groupId>
        </exclusion>
    </exclusions>
</dependency>

<!-- storm -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${storm.version}</version>
    <scope>${provided.scope}</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
        </exclusion>
        <exclusion>
            <artifactId>servlet-api</artifactId>
            <groupId>javax.servlet</groupId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.1.1</version>
    <exclusions>
        <exclusion>
            <artifactId>kafka-clients</artifactId>
            <groupId>org.apache.kafka</groupId>
        </exclusion>
    </exclusions>
</dependency>

The exclusions above remove jars that would otherwise create multiple conflicting transitive dependencies in the project build. The Storm version is 1.1.0. The Spring Boot related dependencies are:

```xml
<!-- spring boot -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>${mybatis-spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```

PS: this Maven list reflects this particular project's needs and is not minimal; take it as a reference only.

Project structure:

- config — configuration files for the different environments
- the remaining packages hold the Spring Boot related implementation classes, named after what they build

(The original post includes a screenshot of the project layout here.)

When starting Spring Boot, we then notice something. Before this integration I had hardly touched Storm, and it turned out that wiring it into Spring Boot leaves no hook that triggers the topology submission after startup. I assumed starting the application was enough, waited half an hour while nothing happened, and only then realized the submission function was never invoked.

My first fix was: start Spring Boot, listen on a Kafka topic, and launch the topology from that listener. The flaw is that the listener fires on every message, so the topology would be submitted over and over, which is clearly not what we want. Then I noticed that Spring can invoke a method once the context has finished loading, which was exactly the lifesaver I needed. The trigger flow therefore becomes:

start Spring Boot -> run the post-startup trigger method -> submit the topology

The class that implements this:

import java.util.Collections;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

import kafka.api.OffsetRequest;

/**
 * @author leezer
 * @date 2017/12/28
 * Automatically submits the topology once Spring has finished loading.
 **/
@Component
public class AutoLoad implements ApplicationListener<ContextRefreshedEvent> {

    private static String brokerZkStr;
    private static String topic;
    private static String host;
    private static String port;

    public AutoLoad(@Value("${storm.brokerZkStr}") String brokerZkStr,
                    @Value("${zookeeper.host}") String host,
                    @Value("${zookeeper.port}") String port,
                    @Value("${kafka.default-topic}") String topic) {
        // Qualify with the class name: a bare "brokerZkStr = brokerZkStr"
        // would just assign the constructor parameter to itself.
        AutoLoad.brokerZkStr = brokerZkStr;
        AutoLoad.host = host;
        AutoLoad.topic = topic;
        AutoLoad.port = port;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        try {
            // Build the topology.
            TopologyBuilder topologyBuilder = new TopologyBuilder();
            // Configure the Kafka topic to subscribe to, plus the ZooKeeper
            // node path and id under which the spout stores its offsets.
            BrokerHosts brokerHosts = new ZkHosts(brokerZkStr);
            SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, topic, "/storm", "s32");
            spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
            spoutConfig.zkServers = Collections.singletonList(host);
            spoutConfig.zkPort = Integer.parseInt(port);
            // Start reading from the latest Kafka offset.
            spoutConfig.startOffsetTime = OffsetRequest.LatestTime();
            KafkaSpout receiver = new KafkaSpout(spoutConfig);
            // Set the spout node and its parallelism; the parallelism hint
            // controls how many executor threads this component gets.
            topologyBuilder.setSpout("kafka-spout", receiver, 1).setNumTasks(2);
            topologyBuilder.setBolt("alarm-bolt", new AlarmBolt(), 1).setNumTasks(2)
                    .shuffleGrouping("kafka-spout");
            Config config = new Config();
            config.setDebug(false);
            /*
             * Number of worker slots this topology claims on the cluster. One slot
             * corresponds to one worker process on a supervisor node. If you ask for
             * more slots than the cluster has free (say you request 4 while only 2
             * workers remain), the topology may submit successfully but never run;
             * it recovers once other topologies are killed and slots are released.
             */
            config.setNumWorkers(1);
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("kafka-spout", config, topologyBuilder.createTopology());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
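
For reference, the @Value placeholders above imply configuration entries roughly like the following; the keys mirror the annotations, while the values are made-up examples rather than the original project's settings:

```properties
# storm.properties / application.properties (illustrative values only)
storm.brokerZkStr=127.0.0.1:2181
zookeeper.host=127.0.0.1
zookeeper.port=2181
kafka.default-topic=log-topic
```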

Note: because the project starts on the embedded Tomcat, you may hit the following error at startup:

[Tomcat-startStop-1] ERROR o.a.c.c.ContainerBase - A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Tomcat].StandardHost[localhost].TomcatEmbeddedContext[]]
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_144]
 at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_144]
 at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:939) [tomcat-embed-core-8.5.23.jar:8.5.23]
 at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:872) [tomcat-embed-core-8.5.23.jar:8.5.23]
 at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.5.23.jar:8.5.23]
 at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1419) [tomcat-embed-core-8.5.23.jar:8.5.23]
 at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1409) [tomcat-embed-core-8.5.23.jar:8.5.23]
 at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) [?:1.8.0_144]
 at java.util.concurrent.FutureTask.run(FutureTask.java) [?:1.8.0_144]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]

The cause is that one of the imported jars drags in a servlet-api older than the embedded Tomcat's; open the Maven dependencies and exclude it:

<exclusion>
    <artifactId>servlet-api</artifactId>
    <groupId>javax.servlet</groupId>
</exclusion>

Then restart and the error goes away.

During startup you may also see:


org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts [localhost]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
 at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:90)

I chewed on this one for quite a while. Every explanation online blamed a bad Storm configuration, but my Storm is deployed on a server; the project holds no local Storm config, and in theory the client should read the server's configuration, yet it did not. After several failed attempts I finally noticed the real distinction: for topology construction, Storm provides a local in-process cluster,

LocalCluster cluster = new LocalCluster();

which is meant for local testing only. When deploying to a real server cluster, the submission call

cluster.submitTopology("kafka-spout", config, topologyBuilder.createTopology());
// must become:
StormSubmitter.submitTopology("kafka-spout", config, topologyBuilder.createTopology());

to hand the job to the cluster; a flag-based sketch combining both paths follows below.
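
One convenient way to keep both submission paths in a single code base is to branch on a flag. This is only a sketch under the assumption of such a boolean (fed, for instance, by a hypothetical storm.local property); it is not code from the original project:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

// Sketch: pick local or cluster submission at runtime.
public final class TopologySubmitter {

    public static void submit(String name, Config config, TopologyBuilder builder,
                              boolean localMode) throws Exception {
        if (localMode) {
            // In-process cluster for development and tests.
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(name, config, builder.createTopology());
        } else {
            // Real cluster; requires nimbus.seeds to be resolvable, e.g. when
            // the jar is deployed with the `storm jar` command.
            StormSubmitter.submitTopology(name, config, builder.createTopology());
        }
    }
}
```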

That settles problems 1-3 above.

Problem 4 concerns using bean instances inside a bolt: even after registering the class with @Component, I could not obtain an instance there. My guess at the cause: when we build and submit the topology, the line


topologyBuilder.setBolt("alarm-bolt", new AlarmBolt(), 1).setNumTasks(2).shuffleGrouping("kafka-spout");

only registers the bolt; the bolt-side setup happens later, in prepare():

@Override
public void prepare(Map stormConf, TopologyContext context,
                    OutputCollector collector) {
    this.collector = collector;
    StormLauncher stormLauncher = StormLauncher.getStormLauncher();
    dataRepositorys = (AlarmDataRepositorys) stormLauncher.getBean("alarmDataRepositorys");
}

so the bolt is never instantiated by Spring: it is created with new, serialized into the topology, and deserialized inside a worker thread, where Spring's injected fields are out of reach. (I do not fully understand the mechanics here; if anyone does, please share.)

But the whole point of using Spring Boot is to have it hand us these tedious objects, so this problem blocked me for a long time. Eventually I wondered whether the instance could be fetched from the application context with getBean(), and I set out to try it.

For example, suppose I need the following service inside a bolt:

import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.ValueOperations;
import org.springframework.stereotype.Service;

/**
 * @author leezer
 * @date 2017/12/27
 * Stores the occurrences of failed operations.
 **/
@Service("alarmDataRepositorys")
public class AlarmDataRepositorys extends RedisBase implements IAlarmDataRepositorys {

    private static final String ERRO = "erro";

    /**
     * @param type error type
     * @param key  key
     * @return the error count
     **/
    @Override
    public String getErrNumFromRedis(String type, String key) {
        if (type == null || key == null) {
            return null;
        } else {
            // primaryStringRedisTemplate and logger are inherited from RedisBase.
            ValueOperations<String, String> valueOper = primaryStringRedisTemplate.opsForValue();
            return valueOper.get(String.format("%s:%s:%s", ERRO, type, key));
        }
    }

    /**
     * @param type  error type
     * @param key   key
     * @param value value to store
     **/
    @Override
    public void setErrNumToRedis(String type, String key, String value) {
        try {
            ValueOperations<String, String> valueOper = primaryStringRedisTemplate.opsForValue();
            valueOper.set(String.format("%s:%s:%s", ERRO, type, key), value,
                    Dictionaries.apiKeyDayOfLifeCycle, TimeUnit.SECONDS);
        } catch (Exception e) {
            logger.info(Dictionaries.REDIS_ERROR_PREFIX
                    + String.format("failed to store key %s into redis", key));
        }
    }
}

Because the bean is registered under an explicit name, the bolt can fetch it with getBean() inside prepare() and then use it normally; a fuller sketch of such a bolt follows.
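
Putting the pieces together, the bolt might look roughly like the sketch below. Only the prepare() lookup mirrors the snippet above; the tuple handling in execute() is invented for illustration:

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class AlarmBolt extends BaseRichBolt {

    private OutputCollector collector;
    // transient: the bean is re-fetched in the worker JVM, never serialized.
    private transient AlarmDataRepositorys dataRepositorys;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        StormLauncher stormLauncher = StormLauncher.getStormLauncher();
        dataRepositorys = (AlarmDataRepositorys) stormLauncher.getBean("alarmDataRepositorys");
    }

    @Override
    public void execute(Tuple input) {
        // Invented example: record one error occurrence per incoming log line.
        String logLine = input.getString(0);
        dataRepositorys.setErrNumToRedis("log", logLine, "1");
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt in this sketch: nothing emitted downstream.
    }
}
```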

Messages from the subscribed Kafka topic then flow into the bolt for processing. The getBean() method used above is defined on the boot main class:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.support.SpringBootServletInitializer;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ImportResource;
import org.springframework.context.annotation.PropertySource;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@SpringBootApplication
@EnableTransactionManagement
@ComponentScan({"service", "storm"})
@EnableMongoRepositories(basePackages = {"storm"})
@PropertySource(value = {"classpath:service.properties", "classpath:application.properties", "classpath:storm.properties"})
@ImportResource(locations = {
        "classpath:/configs/spring-hadoop.xml",
        "classpath:/configs/spring-hbase.xml"})
public class StormLauncher extends SpringBootServletInitializer {

    // Thread-safe holder for the launcher instance.
    private volatile static StormLauncher stormLauncher;
    // The Spring application context.
    private ApplicationContext context;

    public static void main(String[] args) {
        SpringApplicationBuilder application = new SpringApplicationBuilder(StormLauncher.class);
        // application.web(false).run(args); would start Spring Boot in non-web mode
        application.run(args);
        StormLauncher launcher = new StormLauncher();
        launcher.setApplicationContext(application.context());
        setStormLauncher(launcher);
    }

    private static void setStormLauncher(StormLauncher stormLauncher) {
        StormLauncher.stormLauncher = stormLauncher;
    }

    public static StormLauncher getStormLauncher() {
        return stormLauncher;
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(StormLauncher.class);
    }

    /**
     * Get the application context.
     *
     * @return the application context
     */
    public ApplicationContext getApplicationContext() {
        return context;
    }

    /**
     * Set the application context.
     *
     * @param appContext the context
     */
    private void setApplicationContext(ApplicationContext appContext) {
        this.context = appContext;
    }

    /**
     * Look up a bean instance by its name.
     *
     * @param name the bean name
     * @return the bean
     */
    public Object getBean(String name) {
        return context.getBean(name);
    }

    /**
     * Look up a bean by its class.
     *
     * @param <T>   the type parameter
     * @param clazz the class
     * @return the bean
     */
    public <T> T getBean(Class<T> clazz) {
        return context.getBean(clazz);
    }

    /**
     * Look up a bean by both name and class.
     *
     * @param <T>   the type parameter
     * @param name  the bean name
     * @param clazz the class
     * @return the bean
     */
    public <T> T getBean(String name, Class<T> clazz) {
        return context.getBean(name, clazz);
    }
}

With that, integrating Storm and Kafka into Spring Boot is done. I will put the Kafka and other configuration files on GitHub.

Oh, and there is one more kafkaClient pitfall:

Async loop died! java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.

The project complains about the kafka client because storm-kafka is built against the Kafka 0.8 API, while this form of NetworkSend exists only in 0.9 and later; the kafka-clients dependency must match the version of the Kafka you are actually integrating with.

The integration itself is fairly simple, but references were scarce and I was new to Storm, so it took a good deal of thought; I am writing it down here for the record.

Project address -

That is all for this article. I hope it helps with your study, and thank you for your support.
