Using Kafka in .NET Core
Installation

Installing Kafka on CentOS
Download links:

- Kafka: https://archive.apache.org/dist/kafka/2.1.1/kafka_2.12-2.1.1.tgz
- ZooKeeper: https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz
Download and extract
    # Download and extract Kafka
    $ wget https://archive.apache.org/dist/kafka/2.1.1/kafka_2.12-2.1.1.tgz
    $ tar -zxvf kafka_2.12-2.1.1.tgz
    $ mv kafka_2.12-2.1.1 /data/kafka

    # Download and extract ZooKeeper
    $ wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz
    $ tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz
    $ mv apache-zookeeper-3.5.8-bin /data/zookeeper
Start ZooKeeper
    # Copy the sample configuration
    $ cd /data/zookeeper/conf
    $ cp zoo_sample.cfg zoo.cfg

    # Review the configuration and adjust if necessary
    $ vim zoo.cfg

    # Service commands
    $ cd /data/zookeeper
    $ ./bin/zkServer.sh start    # start
    $ ./bin/zkServer.sh status   # status
    $ ./bin/zkServer.sh stop     # stop
    $ ./bin/zkServer.sh restart  # restart

    # Test with the client
    $ ./bin/zkCli.sh -server localhost:2181
    $ quit
Start Kafka
    # Back up the configuration
    $ cd /data/kafka
    $ cp config/server.properties config/server.properties_copy

    # Edit the configuration
    $ vim /data/kafka/config/server.properties

    # In a cluster, every broker must have a unique id
    # broker.id=0

    # Listener address (internal network)
    # listeners=PLAINTEXT://ip:9092

    # Externally advertised ip and port
    # advertised.listeners=PLAINTEXT://106.75.84.97:9092

    # Default number of partitions per topic (num.partitions, default 1);
    # pick a value suited to your hardware, e.g. 3
    # num.partitions=3

    # ZooKeeper connection
    # zookeeper.connect=localhost:2181

    # Start Kafka with this configuration
    $ ./bin/kafka-server-start.sh config/server.properties &

    # Check the process
    $ ps -ef | grep kafka
    $ jps
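With the broker up, you can smoke-test it from .NET. The following is a minimal sketch (not part of the original article) that uses the Confluent.Kafka package introduced later in this post to create a test topic matching the num.partitions setting above; the broker address and topic name are placeholders:

    using System;
    using System.Threading.Tasks;
    using Confluent.Kafka;
    using Confluent.Kafka.Admin;

    class CreateTopic
    {
        static async Task Main()
        {
            var config = new AdminClientConfig { BootstrapServers = "127.0.0.1:9092" };
            using var adminClient = new AdminClientBuilder(config).Build();
            // Create a topic with 3 partitions, matching num.partitions above.
            await adminClient.CreateTopicsAsync(new[]
            {
                new TopicSpecification { Name = "hellotopic", NumPartitions = 3, ReplicationFactor = 1 }
            });
            Console.WriteLine("Topic created.");
        }
    }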
Installing Kafka with Docker
    docker pull wurstmeister/zookeeper
    docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
    docker pull wurstmeister/kafka
    docker run -d --name kafka --publish 9092:9092 --link zookeeper \
      --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      --env KAFKA_ADVERTISED_HOST_NAME=192.168.1.111 \
      --env KAFKA_ADVERTISED_PORT=9092 \
      wurstmeister/kafka
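Whichever installation you use, a quick way to verify that the advertised listener is reachable from a client machine is to ask the broker for its metadata. A minimal sketch, again assuming the Confluent.Kafka package described below, with the broker address as a placeholder:

    using System;
    using Confluent.Kafka;

    class BrokerCheck
    {
        static void Main()
        {
            var config = new AdminClientConfig { BootstrapServers = "192.168.1.111:9092" };
            using var adminClient = new AdminClientBuilder(config).Build();
            // GetMetadata fails if the advertised listener is not reachable.
            var metadata = adminClient.GetMetadata(TimeSpan.FromSeconds(10));
            foreach (var broker in metadata.Brokers)
                Console.WriteLine($"Broker {broker.BrokerId}: {broker.Host}:{broker.Port}");
        }
    }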
Core concepts
- broker: a message-broker node. One Kafka node is one broker, and multiple brokers form a Kafka cluster.
- topic: a category of messages. For example, page-view logs and click logs can each live in their own topic, and a Kafka cluster can serve many topics at the same time.
- partition: a physical grouping of a topic. A topic can be split into multiple partitions, each of which is an ordered queue.
- segment: physically, a partition consists of multiple segments.
- offset: each partition holds an ordered, immutable sequence of messages that are continually appended to it. Every message in a partition carries a sequential id called the offset, which uniquely identifies that message within the partition.
How Kafka partition count relates to consumer count
- If there are more consumers than partitions, the extras are wasted: Kafka does not allow concurrent consumption of a single partition, so never run more consumers than partitions.
- If there are fewer consumers than partitions, one consumer handles several partitions. Balance the two counts carefully, or partitions will be consumed unevenly. Ideally the partition count is an integer multiple of the consumer count, which is why the partition count matters: a value such as 24 makes it easy to choose a consumer count.
- When a consumer reads from multiple partitions, ordering across partitions is not guaranteed. Kafka only guarantees order within a single partition; across partitions, the interleaving depends on the order in which you read.
- Adding or removing consumers, brokers, or partitions triggers a rebalance, after which the partitions assigned to a consumer may change. The sketch below shows how partitions and offsets surface in the client API.
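To make the partition and offset concepts above concrete, here is a sketch (not from the original article) using the Confluent.Kafka client that the rest of this post relies on. A consumer can bypass group assignment entirely and read one specific partition from one specific offset; the broker address and topic name are placeholders:

    using System;
    using Confluent.Kafka;

    class OffsetPeek
    {
        static void Main()
        {
            var config = new ConsumerConfig { BootstrapServers = "127.0.0.1:9092", GroupId = "peek" };
            using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            // Assign() skips the consumer-group protocol (no rebalance is triggered),
            // so this consumer reads partition 0 of "demo-topic" starting at offset 0.
            consumer.Assign(new TopicPartitionOffset("demo-topic", new Partition(0), new Offset(0)));
            var result = consumer.Consume(TimeSpan.FromSeconds(5));
            if (result != null)
                Console.WriteLine($"partition {result.Partition}, offset {result.Offset}: {result.Message.Value}");
            consumer.Close();
        }
    }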
Quick Start

Install the component in your .NET Core project:
    Install-Package Confluent.Kafka
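If you use the .NET CLI rather than the Package Manager Console, the equivalent command (not shown in the original article) is:

    dotnet add package Confluent.Kafka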
Source code: https://github.com/confluentinc/confluent-kafka-dotnet
Add IKafkaService

Define the service interface:
    public interface IKafkaService
    {
        /// <summary>
        /// Publish a message to the specified topic.
        /// </summary>
        /// <typeparam name="TMessage"></typeparam>
        /// <param name="topicName"></param>
        /// <param name="message"></param>
        /// <returns></returns>
        Task PublishAsync<TMessage>(string topicName, TMessage message) where TMessage : class;

        /// <summary>
        /// Subscribe to messages from the specified topics.
        /// </summary>
        /// <typeparam name="TMessage"></typeparam>
        /// <param name="topics"></param>
        /// <param name="messageFunc"></param>
        /// <param name="cancellationToken"></param>
        /// <returns></returns>
        Task SubscribeAsync<TMessage>(IEnumerable<string> topics, Action<TMessage> messageFunc, CancellationToken cancellationToken) where TMessage : class;
    }
Implement IKafkaService
    // Uses Confluent.Kafka and Newtonsoft.Json.
    public class KafkaService : IKafkaService
    {
        public async Task PublishAsync<TMessage>(string topicName, TMessage message) where TMessage : class
        {
            var config = new ProducerConfig { BootstrapServers = "127.0.0.1:9092" };
            using var producer = new ProducerBuilder<string, string>(config).Build();
            await producer.ProduceAsync(topicName, new Message<string, string>
            {
                Key = Guid.NewGuid().ToString(),
                Value = message.SerializeToJson()
            });
        }

        public async Task SubscribeAsync<TMessage>(IEnumerable<string> topics, Action<TMessage> messageFunc, CancellationToken cancellationToken) where TMessage : class
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "127.0.0.1:9092",
                GroupId = "crow-consumer",
                EnableAutoCommit = false,
                StatisticsIntervalMs = 5000,
                SessionTimeoutMs = 6000,
                AutoOffsetReset = AutoOffsetReset.Earliest,
                EnablePartitionEof = true
            };
            //const int commitPeriod = 5;
            using var consumer = new ConsumerBuilder<Ignore, string>(config)
                .SetErrorHandler((_, e) =>
                {
                    Console.WriteLine($"Error: {e.Reason}");
                })
                .SetStatisticsHandler((_, json) =>
                {
                    Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} > listening for messages..");
                })
                .SetPartitionsAssignedHandler((c, partitions) =>
                {
                    string partitionsStr = string.Join(", ", partitions);
                    Console.WriteLine($" - Assigned Kafka partitions: {partitionsStr}");
                })
                .SetPartitionsRevokedHandler((c, partitions) =>
                {
                    string partitionsStr = string.Join(", ", partitions);
                    Console.WriteLine($" - Revoked Kafka partitions: {partitionsStr}");
                })
                .Build();
            consumer.Subscribe(topics);
            try
            {
                while (true)
                {
                    try
                    {
                        var consumeResult = consumer.Consume(cancellationToken);
                        Console.WriteLine($"Consumed message '{consumeResult.Message?.Value}' at: '{consumeResult?.TopicPartitionOffset}'.");
                        if (consumeResult.IsPartitionEOF)
                        {
                            Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} reached end of topic {consumeResult.Topic}, partition {consumeResult.Partition}, offset {consumeResult.Offset}.");
                            continue;
                        }
                        TMessage messageResult = null;
                        try
                        {
                            messageResult = JsonConvert.DeserializeObject<TMessage>(consumeResult.Message.Value);
                        }
                        catch (Exception ex)
                        {
                            var errorMessage = $" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} [Exception: failed to deserialize message, value: {consumeResult.Message.Value}]: {ex.StackTrace?.ToString()}";
                            Console.WriteLine(errorMessage);
                            messageResult = null;
                        }
                        if (messageResult != null/* && consumeResult.Offset % commitPeriod == 0*/)
                        {
                            messageFunc(messageResult);
                            try
                            {
                                consumer.Commit(consumeResult);
                            }
                            catch (KafkaException e)
                            {
                                Console.WriteLine(e.Message);
                            }
                        }
                    }
                    catch (ConsumeException e)
                    {
                        Console.WriteLine($"Consume error: {e.Error.Reason}");
                    }
                }
            }
            catch (OperationCanceledException)
            {
                Console.WriteLine("Closing consumer.");
                consumer.Close();
            }
            await Task.CompletedTask;
        }
    }
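The SerializeToJson() call above is an extension method from the author's codebase that the article never shows. A hypothetical stand-in using Newtonsoft.Json (which the class already uses for deserialization) could look like this:

    using Newtonsoft.Json;

    public static class JsonExtensions
    {
        // Stand-in for the SerializeToJson() helper the article assumes.
        public static string SerializeToJson(this object obj) => JsonConvert.SerializeObject(obj);
    }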
Inject IKafkaService

Inject IKafkaService and call it directly wherever it is needed.
    public class MessageService : IMessageService, ITransientDependency
    {
        private readonly IKafkaService _kafkaService;

        public MessageService(IKafkaService kafkaService)
        {
            _kafkaService = kafkaService;
        }

        public async Task RequestTraceAdded(XxxEventData eventData)
        {
            await _kafkaService.PublishAsync(eventData.TopicName, eventData);
        }
    }
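ITransientDependency here is an ABP-style marker interface that registers the class with the container automatically. If your project has no such convention, a minimal manual registration in Startup.ConfigureServices might look like this (a sketch, assuming Microsoft.Extensions.DependencyInjection):

    public void ConfigureServices(IServiceCollection services)
    {
        // Register the Kafka wrapper and the service that uses it.
        services.AddSingleton<IKafkaService, KafkaService>();
        services.AddTransient<IMessageService, MessageService>();
    }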
The MessageService above is effectively a producer. Once messages have been published to the queue, a consumer is still needed to process them, so a console project can be used to receive the messages and handle the business logic.
    var cts = new CancellationTokenSource();
    Console.CancelKeyPress += (_, e) =>
    {
        e.Cancel = true;
        cts.Cancel();
    };

    await kafkaService.SubscribeAsync<XxxEventData>(topics, async (eventData) =>
    {
        // your logic
        Console.WriteLine($" - {eventData.EventTime:yyyy-MM-dd HH:mm:ss} [{eventData.TopicName}] -> processed");
    }, cts.Token);
The subscription method is already declared on IKafkaService, so here too you simply inject the service and use it.
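If the consumer should live inside an ASP.NET Core application rather than a standalone console project, one option (a sketch assuming the generic host from Microsoft.Extensions.Hosting and the XxxEventData type above; "hellotopic" is a placeholder) is to run the subscription in a hosted service:

    public class KafkaConsumerHostedService : BackgroundService
    {
        private readonly IKafkaService _kafkaService;

        public KafkaConsumerHostedService(IKafkaService kafkaService)
        {
            _kafkaService = kafkaService;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // SubscribeAsync loops until the token is cancelled, so the host's
            // shutdown token is passed straight through.
            await _kafkaService.SubscribeAsync<XxxEventData>(
                new[] { "hellotopic" },
                eventData => Console.WriteLine($"[{eventData.TopicName}] -> processed"),
                stoppingToken);
        }
    }

Register it with services.AddHostedService<KafkaConsumerHostedService>() and it starts and stops with the application.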
Producer and Consumer Example

Producer
    static async Task Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine("Usage: .. brokerList topicName");
            // e.g. 127.0.0.1:9092 hellotopic
            return;
        }

        var brokerList = args.First();
        var topicName = args.Last();

        var config = new ProducerConfig { BootstrapServers = brokerList };
        using var producer = new ProducerBuilder<string, string>(config).Build();

        Console.WriteLine("\n-----------------------------------------------------------------------");
        Console.WriteLine($"Producer {producer.Name} producing on topic {topicName}.");
        Console.WriteLine("-----------------------------------------------------------------------");
        Console.WriteLine("To create a kafka message with UTF-8 encoded key and value:");
        Console.WriteLine("> key value<Enter>");
        Console.WriteLine("To create a kafka message with a null key and UTF-8 encoded value:");
        Console.WriteLine("> value<Enter>");
        Console.WriteLine("Ctrl-C to quit.\n");

        var cancelled = false;
        Console.CancelKeyPress += (_, e) =>
        {
            e.Cancel = true;
            cancelled = true;
        };

        while (!cancelled)
        {
            Console.Write("> ");
            var text = string.Empty;
            try
            {
                text = Console.ReadLine();
            }
            catch (IOException)
            {
                break;
            }
            if (string.IsNullOrWhiteSpace(text))
            {
                break;
            }

            var key = string.Empty;
            var val = text;
            var index = text.IndexOf(" ");
            if (index != -1)
            {
                key = text.Substring(0, index);
                val = text.Substring(index + 1);
            }

            try
            {
                var deliveryResult = await producer.ProduceAsync(topicName, new Message<string, string> { Key = key, Value = val });
                Console.WriteLine($"Delivered to: {deliveryResult.TopicPartitionOffset}");
            }
            catch (ProduceException<string, string> e)
            {
                Console.WriteLine($"Failed to deliver message: {e.Message} [{e.Error.Code}]");
            }
        }
    }
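To try it out, pass the broker address and topic name as the two command-line arguments, for example `dotnet run 127.0.0.1:9092 hellotopic`, then type `key value` (or just `value`) at the prompt; each line is sent as one message.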
Consumer
    static void Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine("Usage: .. brokerList topicName");
            // e.g. 127.0.0.1:9092 hellotopic
            return;
        }

        var brokerList = args.First();
        var topicName = args.Last();

        Console.WriteLine("Started consumer, Ctrl-C to stop consuming");

        var cts = new CancellationTokenSource();
        Console.CancelKeyPress += (_, e) =>
        {
            e.Cancel = true;
            cts.Cancel();
        };

        var config = new ConsumerConfig
        {
            BootstrapServers = brokerList,
            GroupId = "consumer",
            EnableAutoCommit = false,
            StatisticsIntervalMs = 5000,
            SessionTimeoutMs = 6000,
            AutoOffsetReset = AutoOffsetReset.Earliest,
            EnablePartitionEof = true
        };

        const int commitPeriod = 5;

        using var consumer = new ConsumerBuilder<Ignore, string>(config)
            .SetErrorHandler((_, e) =>
            {
                Console.WriteLine($"Error: {e.Reason}");
            })
            .SetStatisticsHandler((_, json) =>
            {
                Console.WriteLine($" - {DateTime.Now:yyyy-MM-dd HH:mm:ss} > monitoring..");
                //Console.WriteLine($"Statistics: {json}");
            })
            .SetPartitionsAssignedHandler((c, partitions) =>
            {
                Console.WriteLine($"Assigned partitions: [{string.Join(", ", partitions)}]");
            })
            .SetPartitionsRevokedHandler((c, partitions) =>
            {
                Console.WriteLine($"Revoking assignment: [{string.Join(", ", partitions)}]");
            })
            .Build();

        consumer.Subscribe(topicName);

        try
        {
            while (true)
            {
                try
                {
                    var consumeResult = consumer.Consume(cts.Token);
                    if (consumeResult.IsPartitionEOF)
                    {
                        Console.WriteLine($"Reached end of topic {consumeResult.Topic}, partition {consumeResult.Partition}, offset {consumeResult.Offset}.");
                        continue;
                    }
                    Console.WriteLine($"Received message at {consumeResult.TopicPartitionOffset}: {consumeResult.Message.Value}");
                    if (consumeResult.Offset % commitPeriod == 0)
                    {
                        try
                        {
                            consumer.Commit(consumeResult);
                        }
                        catch (KafkaException e)
                        {
                            Console.WriteLine($"Commit error: {e.Error.Reason}");
                        }
                    }
                }
                catch (ConsumeException e)
                {
                    Console.WriteLine($"Consume error: {e.Error.Reason}");
                }
            }
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Closing consumer.");
            consumer.Close();
        }
    }
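Note the design choice here: with EnableAutoCommit = false, offsets are committed manually and only on every commitPeriod-th message. That cuts commit traffic to the broker, but it means up to commitPeriod messages may be redelivered after a crash or rebalance, so the processing logic should be idempotent.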
That wraps up this walkthrough of using Kafka in .NET Core.