Installing Zookeeper and Kafka (Ubuntu)
I. Installing Zookeeper
1. Extract Zookeeper onto the virtual machine
tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz -C ~/training
2. Configure Zookeeper
mv /home/niit01/training/zookeeper-3.5.8/conf/zoo_sample.cfg /home/niit01/training/zookeeper-3.5.8/conf/zoo.cfg
vim /home/niit01/training/zookeeper-3.5.8/conf/zoo.cfg
Configuration contents:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataDir=/home/niit01/training/zookeeper-3.5.8/data
dataLogDir=/home/niit01/training/zookeeper-3.5.8/logs
server.1=hadoop01:2888:3888
The format is:
xxx/xxx/xxx/zookeeper-x.x.x/data
xxx/xxx/xxx/zookeeper-x.x.x/logs
server.1=xxx:2888:3888
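If more Zookeeper nodes are added later, zoo.cfg gets one server.N line per node; a minimal sketch, assuming hypothetical hostnames hadoop02 and hadoop03:
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
Each node then writes its own number N into its myid file, as described in the next step.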
3. Create the myid file
mkdir -p /home/niit01/training/zookeeper-3.5.8/data
cd /home/niit01/training/zookeeper-3.5.8/data
touch myid
echo "1" >> myid   # write 1 into myid; it must match the X in server.X=host:2888:3888
4. Open the Zookeeper ports
sudo ufw allow 2888   # firewall exception for the follower-to-leader port
sudo ufw allow 3888   # firewall exception for the leader-election port
sudo ufw allow 2181   # firewall exception for the client port
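You can confirm the rules were added (and that ufw is active) with:
sudo ufw status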
5. Add environment variables
1) Open the profile
sudo vim /etc/profile
2) Add the variables
#zookeeper
export ZK_HOME=/home/niit01/training/zookeeper-3.5.8
export PATH=$ZK_HOME/bin:$PATH
3) Apply the changes
source /etc/profile
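A quick check that the variables took effect:
echo $ZK_HOME        # should print /home/niit01/training/zookeeper-3.5.8
which zkServer.sh    # should resolve to the bin directory under ZK_HOME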
6. Start Zookeeper
1) Start
From the Zookeeper bin directory:
./zkServer.sh start
2) Check the status
./zkServer.sh status
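For this single-node setup the status output should end with a line like the following (exact wording varies by version):
Mode: standalone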
II. Installing Kafka
1. Extract Kafka
tar -zxvf kafka_2.11-2.3.1.tgz -C ~/training
2. Start Zookeeper with Kafka's bundled Zookeeper config file (if the standalone Zookeeper from Part I is already running on port 2181, this step can be skipped)
From the Kafka bin directory:
./zookeeper-server-start.sh -daemon config/zookeeper.properties
3. Configure server.properties
1) Go to the Kafka config directory (/home/niit01/training/kafka_2.11-2.3.1/config)
2) Open the config file
sudo vim server.properties
3) Configuration contents
The relevant settings:
zookeeper.connect=hadoop01:2181
listeners=PLAINTEXT://hadoop01:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
delete.topic.enable=true
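The troubleshooting checklist in step 4 below also refers to broker.id and log.dirs, which are not shown above; a minimal sketch of those two lines, assuming a log directory under the Kafka install path (adjust the path to your environment):
broker.id=1
log.dirs=/home/niit01/training/kafka_2.11-2.3.1/kafka-logs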
4. Start Kafka
From the Kafka bin directory, start Kafka:
./kafka-server-start.sh ../config/server.properties
If it reports that the configuration file cannot be found,
use the absolute path instead:
./kafka-server-start.sh /home/niit01/training/kafka_2.11-2.3.1/config/server.properties
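Once the broker starts cleanly, it can also be run in the background with the -daemon flag, the same way the bundled Zookeeper was started earlier:
./kafka-server-start.sh -daemon /home/niit01/training/kafka_2.11-2.3.1/config/server.properties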
If the broker logs an error and shuts down,
go back to server.properties and carefully check:
- broker.id=1
- the log.dirs path
- the hostname in zookeeper.connect
- the hostname in listeners
- the hostname used in every other setting (see the /etc/hosts sketch below)
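A common cause of such failures is that the hostname hadoop01 does not resolve on the machine; a sketch of an /etc/hosts entry, assuming a hypothetical IP of 192.168.1.101:
192.168.1.101   hadoop01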
Once the broker starts successfully,
check the running processes with jps.
5. Inspect Kafka's metadata through Zookeeper
From the Zookeeper bin directory:
./zkCli.sh
# list the root: several new znodes have appeared
ls /
# check /brokers/ids: the broker has registered itself
ls /brokers/ids
# check /brokers/topics: it is empty, so no topic has been created yet
ls /brokers/topics
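Because auto.create.topics.enable=true, the test topic used in the next step will be created automatically on first use, but it can also be created explicitly from the Kafka bin directory (a sketch using the same hostname):
./kafka-topics.sh --create --zookeeper hadoop01:2181 --replication-factor 1 --partitions 3 --topic test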
6. Start a consumer and a producer
1) Start the consumer
./kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic test
If the consumer fails with a connection error,
check in server.properties that advertised.listeners is uncommented and configured correctly (see the sketch below).
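A minimal sketch of that line, matching the listeners setting used earlier:
advertised.listeners=PLAINTEXT://hadoop01:9092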
2) Start the producer
./kafka-console-producer.sh --broker-list hadoop01:9092 --topic test
With both running, jps shows the Zookeeper and Kafka processes.
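Lines typed into the producer terminal should now appear in the consumer terminal. The topic itself can be inspected from the Kafka bin directory:
./kafka-topics.sh --describe --zookeeper hadoop01:2181 --topic test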