Kafka -- Basics -- 14 -- Kafka Deployment

This article deploys Kafka with docker-compose, so the prerequisite is that Docker and docker-compose are already installed.
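
A quick way to confirm that both tools are present on the host (the exact versions do not matter for this guide, any reasonably recent release should work):

docker --version
docker-compose --version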

1 Standalone deployment

1.1 kafka-single directory structure

----kafka-single
  ----docker-compose.yml

1.2 docker-compose.yml

version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka 
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11  # this is the host machine's IP, not the kafka container's IP
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
        - /var/run/docker.sock:/var/run/docker.sock
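
The wurstmeister/kafka image translates the KAFKA_* environment variables into broker settings when the container starts. If clients on other machines cannot connect, a useful first check is to confirm what the broker actually advertises; the config path below assumes the wurstmeister image layout (KAFKA_HOME=/opt/kafka), so adjust it if your image differs:

docker-compose exec kafka grep -E "advertised|zookeeper.connect" /opt/kafka/config/server.properties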

1.3 Start

[root@localhost kafka-single]# docker-compose up -d
[root@localhost kafka-single]# docker-compose ps
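
If either service is not shown as "Up", the container logs usually explain why (for example, Kafka failing to reach ZooKeeper):

docker-compose logs --tail=100 zookeeper
docker-compose logs --tail=100 kafka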

1.4 Verify that the startup succeeded

1.4.1 Test with zkCli.sh

ZooKeeper is configured on the local machine but not started there; the point is only to have zkCli.sh available for checking whether Kafka came up correctly. See sections 1.1-1.4 of the local ZooKeeper configuration notes.

(base) [root@localhost kafka-single]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1001]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[__consumer_offsets, test]
[zk: localhost:2181(CONNECTED) 3] 
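
On recent ZooKeeper releases, zkCli.sh can also run a single command non-interactively, which is convenient for quick checks or scripts; assuming the same local zkCli.sh as above:

zkCli.sh -server localhost:2181 ls /brokers/ids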

1.4.2 Test with the Kafka command-line clients

The kafka_2.12-2.2.0 directory is obtained by downloading kafka_2.12-2.2.0.tgz and extracting it; its bin directory contains the command-line clients used below.

List the topics

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

(base) [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
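
Topics can also be created manually with the same tool; test2 below is just an example name, and with a single broker the replication factor cannot exceed 1:

./kafka-topics.sh --create --zookeeper localhost:2181 --topic test2 --partitions 1 --replication-factor 1
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2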

Test with the console producer and consumer

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

./kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
>def
>opq

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

(base) [root@localhost bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
abc
def
opq

2 Cluster deployment

On a single physical machine, use Docker to start 3 ZooKeeper containers and 3 Kafka containers.

2.1 kafka-cluster directory structure

----kafka-cluster
  ----docker-compose.yml
  ----kafka
    ----kafka1
    ----kafka2
    ----kafka3
  ----zookeeper
    ----zoo1
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
    ----zoo2
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
    ----zoo3
      ----data
        ----myid
      ----datalog
      ----config
        ----zoo.cfg
  ----kafka_2.12-2.2.0

2.2 Network configuration

zookeeper  myid       ip             port mapping
zoo1       1          172.16.238.10  2181:2181
zoo2       2          172.16.238.11  2182:2181
zoo3       3          172.16.238.12  2183:2181

kafka      broker_id  ip             port mapping
kafka1     1          172.16.238.21  9092:9092
kafka2     2          172.16.238.22  9093:9092
kafka3     3          172.16.238.23  9094:9092

2.3 Configuration files

2.3.1 docker-compose.yml

version: '3.1'

services:
  zoo1:
    image: zookeeper
#    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.10
    volumes:
      - ./zookeeper/zoo1/data:/data
      - ./zookeeper/zoo1/datalog:/datalog
      - ./zookeeper/zoo1/config:/conf
  zoo2:
    image: zookeeper
#    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.11
    volumes:
      - ./zookeeper/zoo2/data:/data
      - ./zookeeper/zoo2/datalog:/datalog
      - ./zookeeper/zoo2/config:/conf
  zoo3:
    image: zookeeper
#    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    networks:
      mynet:
        ipv4_address: 172.16.238.12
    volumes:
      - ./zookeeper/zoo3/data:/data
      - ./zookeeper/zoo3/datalog:/datalog
      - ./zookeeper/zoo3/config:/conf
  kafka1:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1 
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11  # this is the host machine's IP, not the kafka1 container's IP
      KAFKA_CREATE_TOPICS: "test:1:1"  # automatically creates a topic on startup, format "topic:partitions:replicas"
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message, in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka1:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.21
  kafka2:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - 9093:9092
    environment:
      KAFKA_BROKER_ID: 2 
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message, in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka2:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.22
  kafka3:
    image: wurstmeister/kafka
#    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - 9094:9092
    environment:
      KAFKA_BROKER_ID: 3 
      KAFKA_ADVERTISED_HOST_NAME: 10.0.8.11 
      KAFKA_MESSAGE_MAX_BYTES: 2000000  # maximum size of a single message, in bytes
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
       - /var/run/docker.sock:/var/run/docker.sock
       - ./kafka/kafka3:/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    networks:
      mynet:
        ipv4_address: 172.16.238.23
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
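
Before starting the stack, the file can be validated; docker-compose resolves the YAML and reports errors without creating anything:

docker-compose config -q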

2.3.2 zoo1/config/zoo.cfg, zoo2/config/zoo.cfg, zoo3/config/zoo.cfg

All three configuration files have the same content.

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
server.1=172.16.238.10:2888:3888
server.2=172.16.238.11:2888:3888
server.3=172.16.238.12:2888:3888
4lw.commands.whitelist=*
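
Since 4lw.commands.whitelist=* is enabled above, each node's role can be checked from the host with the four-letter srvr command once the cluster is running (requires nc; ports 2181-2183 are the host-side mappings from section 2.2):

for port in 2181 2182 2183; do
  echo srvr | nc localhost $port | grep Mode   # one node should report leader, the other two follower
done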

2.3.3 myid

zoo1/data/myid

1

zoo2/data/myid

2

zoo3/data/myid

3
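
The directory tree from section 2.1, including the myid files, can be prepared in one step. A minimal sketch, run from inside the kafka-cluster directory:

for i in 1 2 3; do
  mkdir -p zookeeper/zoo$i/data zookeeper/zoo$i/datalog zookeeper/zoo$i/config
  mkdir -p kafka/kafka$i
  echo $i > zookeeper/zoo$i/data/myid        # myid must match server.N in zoo.cfg
done

The zoo.cfg content from section 2.3.2 still has to be copied into each zookeeper/zoo*/config directory.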

2.4 Start

[root@localhost kafka-cluster]# docker-compose up -d
[root@localhost kafka-cluster]# docker-compose ps

2.5 Verify that the startup succeeded

2.5.1 Test with zkCli.sh

ZooKeeper is configured on the local machine but not started there; the point is only to have zkCli.sh available for checking whether the Kafka cluster came up correctly. See sections 1.1-1.4 of the local ZooKeeper configuration notes.

Open a terminal window and test through zoo1

(base) [root@localhost kafka-cluster]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[__consumer_offsets, test]
[zk: localhost:2181(CONNECTED) 3] 

Open another terminal window and test through zoo2

(base) [root@localhost kafka-cluster]# zkCli.sh
Connecting to localhost:2181
...
...
...
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] close
...
...
...
[zk: localhost:2181(CLOSED) 1] connect localhost:2182
...
...
...
[zk: localhost:2182(CONNECTED) 2] get /cluster/id
{"version":"1","id":"_LrguUKmSSy9Mowvn3VAxQ"}
[zk: localhost:2182(CONNECTED) 3] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2182(CONNECTED) 4] ls /brokers/topics
[__consumer_offsets, test]

Open another terminal window and test through zoo3

Testing through zoo3 works the same way as testing through zoo2, only connecting to localhost:2183 instead.

2.5.2 Test with the Kafka command-line clients

The kafka_2.12-2.2.0 directory is obtained by downloading kafka_2.12-2.2.0.tgz and extracting it.

List the topics

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

(base) [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
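
With three brokers registered, a topic can be spread across the whole cluster; test3 and the partition/replica counts below are only an example:

./kafka-topics.sh --create --zookeeper localhost:2181 --topic test3 --partitions 3 --replication-factor 3
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test3

The describe output lists, for each partition, its leader broker and replica set; with 3 partitions and 3 replicas the leaders are typically spread across broker ids 1-3.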

Test with the console producer and consumer

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

./kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
>def
>opq

Open another terminal window and change into the kafka_2.12-2.2.0/bin directory

(base) [root@localhost bin]# ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
abc
def
opq
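
To see how the cluster tracks consumer progress, the console consumer can also be run with an explicit group and the group inspected afterwards; testgroup is just an example name:

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --group testgroup

Then, in another terminal window:

./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group testgroup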