
An Example of Integrating a Kafka Message Queue with Koa + egg.js

程序员文章站 2022-04-06 09:41:24

egg.js: an enterprise-grade framework built on Koa 2

Kafka: a high-throughput, distributed publish/subscribe messaging system

This article walks through a logging-system example that integrates egg + Kafka + MySQL.

Requirement: record logs, with Kafka providing the message-queue buffering.

Architecture sketch:

(architecture diagram)

Here, both the consumer and the producer are provided by the logging system itself.

λ.1 Environment setup

① Kafka

Download Kafka from the official site and unpack it.

Start ZooKeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties

Start the Kafka server. In config/server.properties we set num.partitions=5, i.e. five partitions:

bin/kafka-server-start.sh config/server.properties

② egg + MySQL

Scaffold an egg project, then additionally install kafka-node and egg-mysql.

MySQL user: root, password: 123456

λ.2 Integration

1. Create app.js in the project root; this file runs every time the application boots.

'use strict';

const kafka = require('kafka-node');

module.exports = app => {
  app.beforeStart(async () => {
    const ctx = app.createAnonymousContext();

    const Producer = kafka.Producer;
    const client = new kafka.KafkaClient({ kafkaHost: app.config.kafkaHost });
    const producer = new Producer(client, app.config.producerConfig);

    producer.on('error', function(err) {
      console.error('ERROR: [Producer] ' + err);
    });

    app.producer = producer;

    const consumer = new kafka.Consumer(client, app.config.consumerTopics, {
      autoCommit: false,
    });

    consumer.on('message', async function(message) {
      try {
        await ctx.service.log.insert(JSON.parse(message.value));
        consumer.commit(true, (err, data) => {
          console.error('commit:', err, data);
        });
      } catch (error) {
        console.error('ERROR: [GetMessage] ', message, error);
      }
    });

    consumer.on('error', function(err) {
      console.error('ERROR: [Consumer] ' + err);
    });
  });
};

The code above creates a producer and a consumer.

The producer is attached to the global app object once created; messages are produced later, per request — at this point we only instantiate it.

When the consumer receives a message, it calls the service layer's insert method, which writes the record to the database.

For the exact options, see the official kafka-node API docs; the producer and consumer configuration used here appears further down.
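The object kafka-node hands to the 'message' handler carries the raw value as a JSON string. The parse step can be sketched standalone (the message object below is hand-built for illustration, not produced by a real broker):

```javascript
// A hand-built example of the shape the consumer's 'message' handler
// receives: value is the raw JSON string the producer sent, plus
// topic/partition/offset metadata.
const message = {
  topic: 'logaction_p5',
  partition: 3,
  offset: 42,
  value: JSON.stringify({ type: 'error', level: 1, operator: 'knove' }),
};

// The handler in app.js parses value before handing it to the service layer.
const log = JSON.parse(message.value);

console.log(log.type, message.partition); // error 3
```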

2. controller · log.js

Here we grab the producer and pass it down to the service layer.

'use strict';

const Controller = require('egg').Controller;

class LogController extends Controller {
  /**
   * @description Log pipeline controlled through Kafka
   * @host /log/notice
   * @method POST
   * @param {Log} log the log entry
   */
  async notice() {
    const producer = this.ctx.app.producer;
    const response = new this.ctx.app.Response();

    const requestBody = this.ctx.request.body;
    const backInfo = await this.ctx.service.log.send(producer, requestBody);
    this.ctx.body = response.success(backInfo);
  }
}

module.exports = LogController;

3. service · log.js

The send method calls producer.send, which actually produces the message.

The insert method writes the record to the database.

'use strict';

const Service = require('egg').Service;
const uuidv1 = require('uuid/v1');

class LogService extends Service {
  async send(producer, params) {
    const payloads = [
      {
        topic: this.ctx.app.config.topic,
        messages: JSON.stringify(params),
      },
    ];

    producer.send(payloads, function(err, data) {
      console.log('send : ', data);
    });

    return 'success';
  }

  async insert(message) {
    try {
      const logDB = this.ctx.app.mysql.get('log');
      const ip = this.ctx.ip;

      const logs = this.ctx.model.Log.build({
        id: uuidv1(),
        type: message.type || '',
        level: message.level || 0,
        operator: message.operator || '',
        content: message.content || '',
        ip,
        user_agent: message.user_agent || '',
        error_stack: message.error_stack || '',
        url: message.url || '',
        request: message.request || '',
        response: message.response || '',
        created_at: new Date(),
        updated_at: new Date(),
      });

      const result = await logDB.insert('logs', logs.dataValues);

      if (result.affectedRows === 1) {
        console.log(`SUCCESS: [insert ${message.type}]`);
      } else console.error('ERROR: [insert DB] ', result);
    } catch (error) {
      console.error('ERROR: [insert] ', message, error);
    }
  }
}

module.exports = LogService;
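The payload construction in send and the defaulting applied in insert are easy to verify in isolation. A minimal sketch — buildPayloads and withDefaults are illustrative names, not part of the project:

```javascript
// Mirror of what send() does: wrap the request body as a kafka-node payload.
function buildPayloads(topic, params) {
  return [{ topic, messages: JSON.stringify(params) }];
}

// Mirror of the defaulting insert() applies before writing to MySQL:
// missing fields fall back to '' (or 0 for level).
function withDefaults(message) {
  return {
    type: message.type || '',
    level: message.level || 0,
    operator: message.operator || '',
    content: message.content || '',
  };
}

const payloads = buildPayloads('logaction_p5', { type: 'error', level: 1 });
const row = withDefaults(JSON.parse(payloads[0].messages));

console.log(payloads[0].topic); // logaction_p5
// row.operator and row.content have been defaulted to ''.
```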

4. config · config.default.js

The configuration the code above relies on lives here; note that five partitions are configured.

'use strict';

module.exports = appInfo => {
  const config = (exports = {});

  const topic = 'logaction_p5';

  // add your config here
  config.middleware = [];

  config.security = {
    csrf: {
      enable: false,
    },
  };

  // MySQL database configuration
  config.mysql = {
    clients: {
      basic: {
        host: 'localhost',
        port: '3306',
        user: 'root',
        password: '123456',
        database: 'merchants_basic',
      },
      log: {
        host: 'localhost',
        port: '3306',
        user: 'root',
        password: '123456',
        database: 'merchants_log',
      },
    },
    default: {},
    app: true,
    agent: false,
  };

  // Sequelize config
  config.sequelize = {
    dialect: 'mysql',
    database: 'merchants_log',
    host: 'localhost',
    port: '3306',
    username: 'root',
    password: '123456',
    dialectOptions: {
      requestTimeout: 999999,
    },
    pool: {
      acquire: 999999,
    },
  };

  // Kafka config
  config.kafkaHost = 'localhost:9092';

  config.topic = topic;

  config.producerConfig = {
    // partitioner type (default = 0, random = 1, cyclic = 2, keyed = 3, custom = 4), default 0
    partitionerType: 1,
  };

  config.consumerTopics = [
    { topic, partition: 0 },
    { topic, partition: 1 },
    { topic, partition: 2 },
    { topic, partition: 3 },
    { topic, partition: 4 },
  ];

  return config;
};
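Since consumerTopics is just one { topic, partition } entry per partition, it can also be generated from the partition count instead of written out by hand — a small sketch:

```javascript
// Generate one { topic, partition } entry per partition, matching the
// five hand-written entries in config.consumerTopics above.
function buildConsumerTopics(topic, numPartitions) {
  return Array.from({ length: numPartitions }, (_, partition) => ({
    topic,
    partition,
  }));
}

const consumerTopics = buildConsumerTopics('logaction_p5', 5);
console.log(consumerTopics.length); // 5
```

This keeps the config in step with num.partitions if the partition count ever changes.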

5. Model class:

model · log.js

Sequelize is used here.

'use strict';

module.exports = app => {
  const { STRING, INTEGER, DATE, TEXT } = app.Sequelize;

  const Log = app.model.define('log', {
    /**
     * UUID
     */
    id: { type: STRING(36), primaryKey: true },
    /**
     * Log type
     */
    type: STRING(100),
    /**
     * Priority level (higher number = higher priority)
     */
    level: INTEGER,
    /**
     * Operator
     */
    operator: STRING(50),
    /**
     * Log content
     */
    content: TEXT,
    /**
     * IP
     */
    ip: STRING(36),
    /**
     * User agent of the current request
     */
    user_agent: STRING(150),
    /**
     * Error stack
     */
    error_stack: TEXT,
    /**
     * URL
     */
    url: STRING(255),
    /**
     * Request object
     */
    request: TEXT,
    /**
     * Response object
     */
    response: TEXT,
    /**
     * Creation time
     */
    created_at: DATE,
    /**
     * Update time
     */
    updated_at: DATE,
  });

  return Log;
};
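Several of these columns are capped (e.g. user_agent is VARCHAR(150)), so an oversized value would make the insert fail. A defensive clamp before insert can be sketched as follows — clampToSchema is an illustrative helper, not part of the project, with limits copied from the model above:

```javascript
// Column width limits, copied from the Sequelize model definition above.
const LIMITS = { type: 100, operator: 50, ip: 36, user_agent: 150, url: 255 };

// Truncate any string field that would overflow its VARCHAR column.
function clampToSchema(record) {
  const out = { ...record };
  for (const [field, max] of Object.entries(LIMITS)) {
    if (typeof out[field] === 'string' && out[field].length > max) {
      out[field] = out[field].slice(0, max);
    }
  }
  return out;
}

const clamped = clampToSchema({
  user_agent: 'x'.repeat(200), // deliberately over the 150-char limit
  url: 'http://localhost:7001/log/notice',
});
console.log(clamped.user_agent.length); // 150
```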

6. Test script (Python):

import requests

from threading import Thread


def loop():
    t = 1000
    while t:
        url = "http://localhost:7001/log/notice"

        payload = "{\n\t\"type\": \"error\",\n\t\"level\": 1,\n\t\"content\": \"url send error\",\n\t\"operator\": \"knove\"\n}"
        headers = {
            'Content-Type': "application/json",
            'Cache-Control': "no-cache"
        }

        response = requests.request("POST", url, data=payload, headers=headers)

        print(response.text)
        t -= 1


if __name__ == '__main__':
    for i in range(10):
        t = Thread(target=loop)
        t.start()

7. Table creation SQL:

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for logs
-- ----------------------------
DROP TABLE IF EXISTS `logs`;
CREATE TABLE `logs` (
  `id` varchar(36) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  `type` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL COMMENT 'Log type',
  `level` int(11) NULL DEFAULT NULL COMMENT 'Priority level (higher number = higher priority)',
  `operator` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'Operator',
  `content` text CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL COMMENT 'Log content',
  `ip` varchar(36) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'IP',
  `user_agent` varchar(150) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'User agent of the current request',
  `error_stack` text CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL COMMENT 'Error stack',
  `url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL DEFAULT NULL COMMENT 'Current URL',
  `request` text CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL COMMENT 'Request object',
  `response` text CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NULL COMMENT 'Response object',
  `created_at` datetime(0) NULL DEFAULT NULL COMMENT 'Creation time',
  `updated_at` datetime(0) NULL DEFAULT NULL COMMENT 'Update time',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_bin ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;

λ.3 Closing notes

There is little material on this topic online; this write-up came out of digging through the various docs to find a working approach.

That is all for this article — I hope it helps with your own study.