
Kafka Installation and Basic Usage


Installation Prerequisites

First install ZooKeeper and Scala.

Installing ZooKeeper

Download and extract the package (set up one node first), then edit the configuration file zoo.cfg as shown below.
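Unpacking the release onto the first node might look like the following (the version number and install path are only examples, adjust to whatever release you actually downloaded):

[root@hadoop001 software]# tar -xzf zookeeper-3.4.14.tar.gz
[root@hadoop001 software]# mv zookeeper-3.4.14 zookeeper
[root@hadoop001 software]# cd zookeeper/conf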

[root@hadoop001 conf]# cp zoo_sample.cfg zoo.cfg
[root@hadoop001 conf]# vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.

dataDir=/opt/software/zookeeper/data

# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

The key changes are dataDir and the three server entries.
Then create the data directory and the myid file; all three machines need them:

[root@hadoop001 conf]# cd ../
[root@hadoop001 zookeeper]# mkdir data
[root@hadoop001 zookeeper]# touch data/myid
[root@hadoop001 zookeeper]# echo 1 > data/myid

For hadoop002 and hadoop003, copy the whole zookeeper directory over with scp and modify the myid file:

[root@hadoop001 software]# scp -r zookeeper 192.168.204.202:/opt/software/
[root@hadoop001 software]# scp -r zookeeper 192.168.204.203:/opt/software/

[root@hadoop002 zookeeper]# echo 2 > data/myid
[root@hadoop003 zookeeper]# echo 3 > data/myid
Start the ZooKeeper cluster:
[root@hadoop001 bin]# ./zkServer.sh start
[root@hadoop002 bin]# ./zkServer.sh start
[root@hadoop003 bin]# ./zkServer.sh start

Check the status: ./zkServer.sh status
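If the cluster came up correctly, one node reports itself as the leader and the other two as followers; the output looks roughly like this (the config path depends on your install location):

[root@hadoop001 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower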

Installing Scala

Download and extract the package, then configure the environment variables.
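A minimal sketch, assuming the Scala tarball has been placed under /opt/software (the version here is a placeholder; it should match the Scala version of your Kafka build):

[root@hadoop001 software]# tar -xzf scala-2.11.12.tgz
[root@hadoop001 software]# mv scala-2.11.12 scala
[root@hadoop001 software]# vi /etc/profile
export SCALA_HOME=/opt/software/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@hadoop001 software]# source /etc/profile
[root@hadoop001 software]# scala -version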

Deploying Kafka

Download and extract the package, create a logs directory, and modify server.properties.
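Unpacking the release might look like the following (the version is only a placeholder; the Scala version in the archive name should match the Scala installed above):

[root@hadoop001 software]# tar -xzf kafka_2.11-1.0.0.tgz
[root@hadoop001 software]# mv kafka_2.11-1.0.0 kafka
[root@hadoop001 software]# cd kafka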

[root@hadoop001 kafka]# mkdir logs
[root@hadoop001 kafka]# cd config/
[root@hadoop001 config]# vi server.properties
broker.id=1
port=9092
host.name=192.168.204.201
log.dirs=/opt/software/kafka/logs
zookeeper.connect=192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka

Modify the entries above (add any that are missing) and configure the environment variables.
Do the same on the other two machines (remember to change broker.id and host.name accordingly).
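The environment variable setup is not spelled out above; a minimal sketch, assuming Kafka is installed under /opt/software/kafka:

[root@hadoop001 config]# vi /etc/profile
export KAFKA_HOME=/opt/software/kafka
export PATH=$KAFKA_HOME/bin:$PATH
[root@hadoop001 config]# source /etc/profile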

Start / Stop
Start (run on all three nodes, from the Kafka installation directory):
nohup kafka-server-start.sh config/server.properties &
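To confirm each broker is running, jps should show a Kafka process on every node (the process IDs below are only illustrative):

[root@hadoop001 kafka]# jps
3562 QuorumPeerMain
4021 Kafka
4102 Jps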

Stop:
bin/kafka-server-stop.sh

Basic Usage

Create a topic:
bin/kafka-topics.sh --create \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka \
--replication-factor 3 --partitions 3 --topic test

The parameters above can be viewed with kafka-topics.sh --help.
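You can also verify the topic with --list or --describe, which shows the leader and replica assignment of each partition:

bin/kafka-topics.sh --describe \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka \
--topic test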
After creating the topic, start a producer to test (broker-list refers to the Kafka brokers, not ZooKeeper):

bin/kafka-console-producer.sh \
--broker-list 192.168.204.201:9092,192.168.204.202:9092,192.168.204.203:9092 --topic test

In another terminal, start a consumer and subscribe to the messages produced to the topic:

bin/kafka-console-consumer.sh \
--zookeeper 192.168.204.201:2181,192.168.204.202:2181,192.168.204.203:2181/kafka \
--from-beginning --topic test

In the producer terminal, type a message and press Enter; if the consumer terminal receives it, the Kafka deployment was successful.
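For example, a line typed at the producer prompt:

>hello kafka

should appear in the consumer terminal as:

hello kafka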

Miscellaneous

Kafka's metadata is stored in ZooKeeper.
Kafka's message data is stored on disk.
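For example, the broker registrations can be inspected under the /kafka chroot with the ZooKeeper client (the ids shown assume the three broker.id values configured above):

[root@hadoop001 bin]# ./zkCli.sh -server hadoop001:2181
[zk: hadoop001:2181(CONNECTED) 0] ls /kafka/brokers/ids
[1, 2, 3]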
