Kafka Cluster Installation Manual
CentOS 7
Table of Contents
1 Overview
2 Deployment Environment
3 System Environment Setup
3.1 Change the yum Repository
4 Install the Java Environment
4.1 Configure Environment Variables
4.2 Verify
5 Disable the Firewall
6 Configure SSH Trust Between Servers (Optional)
7 Kafka Cluster Installation
7.1 Extract the Archive
7.2 Edit the Configuration File
7.3 Configure ZooKeeper
7.4 Start ZooKeeper
7.5 Start Kafka
7.6 Verify the Installation
7.7 Operating Kafka
7.8 Start on Boot
7.8.1 ZooKeeper Autostart
7.8.2 Kafka Autostart
7.9 Verification After Autostart
8 Standalone ZooKeeper Installation (Optional)
8.1 Extract the Archive
8.2 Configure ZooKeeper
8.3 Create the myid File
8.4 Configure Environment Variables
8.5 Start
8.6 Test Start in the Foreground
8.7 Normal Background Start
8.8 Start on Boot
1 Overview
As message-oriented middleware, Kafka offers high throughput, low latency, and high message reliability, and it can be deployed as the messaging layer in scenarios that involve large volumes of messages.
2 Deployment Environment
Three virtual machines are used for the deployment.
IP            Hostname  Notes         Username  Password
192.168.5.39  kafka-1   Kafka node 1  root      root
192.168.5.40  kafka-2   Kafka node 2  root      root
192.168.5.41  kafka-3   Kafka node 3  root      root
Each node is allocated 8 GB of memory and 2 CPUs.
3 System Environment Setup
3.1 Change the yum Repository
yum install -y wget vim
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
yum update -y
Add name resolution for the nodes to the /etc/hosts file and comment out the 127.0.0.1 entry.
vim /etc/hosts
192.168.5.39 kafka-1
192.168.5.40 kafka-2
192.168.5.41 kafka-3
4 Install the Java Environment
Version: jdk-8u231-linux-x64.tar.gz
tar -zxvf jdk-8u231-linux-x64.tar.gz -C /usr/local
4.1 Configure Environment Variables
vim /etc/profile
Append the following lines to the end of the file:
export JAVA_HOME=/usr/local/jdk1.8.0_231
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
After saving, reload the environment variables:
source /etc/profile
The configuration is now complete.
4.2 Verify
java -version
javac -version
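If the environment is set up correctly, both commands report the installed JDK version, for example:
java version "1.8.0_231"
javac 1.8.0_231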
5 Disable the Firewall
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service
6 Configure SSH Trust Between Servers (Optional)
SSH trust is configured only to make copying files easier. It can be skipped, but then the password has to be entered manually every time a file is copied with scp.
(Required on all three machines.)
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@kafka-2
ssh-copy-id -i .ssh/id_rsa.pub root@kafka-3
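To confirm that passwordless login works, a quick check like the following can be run from kafka-1 (the hostnames resolve through the /etc/hosts entries added above):
ssh root@kafka-2 hostname
ssh root@kafka-3 hostname
Each command should print the remote hostname without asking for a password.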
7 Kafka Cluster Installation
Project homepage: http://kafka.apache.org/
Download: https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.4.1/kafka_2.13-2.4.1.tgz
The following steps must also be performed on all three servers.
7.1 Extract the Archive
tar -zxvf kafka_2.13-2.4.1.tgz -C /opt/
7.2 Edit the Configuration File
vim /opt/kafka_2.13-2.4.1/config/server.properties
Review the contents of the configuration file carefully; if any property appears more than once, the duplicates must be removed.
Configuration for kafka-1
# The broker id must be unique; here the last octet of the node's IP address is used
broker.id=39
port=9092
host.name=kafka-1
log.dirs=/data/kafka/logs
zookeeper.connect=kafka-1:2181,kafka-2:2181,kafka-3:2181
listeners = PLAINTEXT://kafka-1:9092
advertised.listeners=PLAINTEXT://kafka-1:9092
Configuration for kafka-2
# The broker id must be unique; here the last octet of the node's IP address is used
broker.id=40
port=9092
host.name=kafka-2
log.dirs=/data/kafka/logs
zookeeper.connect=kafka-1:2181,kafka-2:2181,kafka-3:2181
listeners = PLAINTEXT://kafka-2:9092
advertised.listeners=PLAINTEXT://kafka-2:9092
Configuration for kafka-3
# The broker id must be unique; here the last octet of the node's IP address is used
broker.id=41
port=9092
host.name=kafka-3
log.dirs=/data/kafka/logs
zookeeper.connect=kafka-1:2181,kafka-2:2181,kafka-3:2181
listeners = PLAINTEXT://kafka-3:9092
advertised.listeners=PLAINTEXT://kafka-3:9092
After saving, create the log directory:
mkdir -p /data/kafka/logs
7.3 Configure ZooKeeper
This guide uses the ZooKeeper bundled with Kafka. If you prefer to install ZooKeeper separately, see chapter 8.
vim /opt/kafka_2.13-2.4.1/config/zookeeper.properties
Rewrite the file so that it contains the following:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
server.1=kafka-1:12888:13888
server.2=kafka-2:12888:13888
server.3=kafka-3:12888:13888
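Note: because dataDir=/data/zookeeper and the server.N entries above turn the bundled ZooKeeper into a three-node ensemble, each node also needs the data directory and a myid file matching its server.N number, exactly as described for the standalone install in section 8.3. For example, on kafka-1:
mkdir -p /data/zookeeper
echo 1 > /data/zookeeper/myid
Write 2 into the file on kafka-2 and 3 on kafka-3.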
7.4 Start ZooKeeper
Start ZooKeeper on each of the three machines:
/opt/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /opt/kafka_2.13-2.4.1/config/zookeeper.properties
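The command above runs ZooKeeper in the foreground, which is convenient for the first start. Once it works, the start script also accepts a -daemon flag to run it in the background:
/opt/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.13-2.4.1/config/zookeeper.properties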
7.5 Start Kafka
Then start Kafka on each of the three machines:
/opt/kafka_2.13-2.4.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.13-2.4.1/config/server.properties
7.6 Verify the Installation
Run jps on each of the three machines; if the Kafka and QuorumPeerMain (ZooKeeper) processes are listed, Kafka has started successfully.
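The output should look roughly like the following (the process ids will differ):
jps
1463 QuorumPeerMain
2875 Kafka
3021 Jps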
7.7 Operating Kafka
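A minimal smoke test of the cluster can be performed with the command-line tools shipped with Kafka; the topic name test used below is only an example:
# Create a replicated topic
/opt/kafka_2.13-2.4.1/bin/kafka-topics.sh --create --bootstrap-server kafka-1:9092 --replication-factor 3 --partitions 3 --topic test
# List existing topics
/opt/kafka_2.13-2.4.1/bin/kafka-topics.sh --list --bootstrap-server kafka-1:9092
# Produce a few messages (type them interactively, exit with Ctrl+C)
/opt/kafka_2.13-2.4.1/bin/kafka-console-producer.sh --broker-list kafka-1:9092 --topic test
# Consume the messages from another node
/opt/kafka_2.13-2.4.1/bin/kafka-console-consumer.sh --bootstrap-server kafka-2:9092 --topic test --from-beginning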
7.8 Start on Boot
7.8.1 ZooKeeper Autostart
vim /lib/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target
[Service]
Type=simple
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/jdk1.8.0_231/bin"
User=root
Group=root
ExecStart=/opt/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /opt/kafka_2.13-2.4.1/config/zookeeper.properties
ExecStop=/opt/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Make sure the Java path and the start script paths match your installation.
Reload the systemd configuration:
systemctl daemon-reload
Enable the service at boot:
systemctl enable zookeeper.service
7.8.2 Kafka Autostart
vim /lib/systemd/system/kafka.service
[Unit]
Description=Kafka service
After=network.target zookeeper.service
[Service]
Type=simple
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/jdk1.8.0_231/bin"
User=root
Group=root
ExecStart=/opt/kafka_2.13-2.4.1/bin/kafka-server-start.sh /opt/kafka_2.13-2.4.1/config/server.properties
ExecStop=/opt/kafka_2.13-2.4.1/bin/kafka-server-stop.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Note that, unlike the ZooKeeper unit, this one starts kafka-server-start.sh with server.properties, and it is ordered after zookeeper.service so that Kafka does not come up before ZooKeeper.
Reload the systemd configuration:
systemctl daemon-reload
Enable the service at boot:
systemctl enable kafka.service
7.9 Verification After Autostart
/opt/kafka_2.13-2.4.1/bin/zookeeper-shell.sh kafka-1:2181
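Inside the shell, listing the registered broker ids is a quick way to confirm that all three brokers reconnected to ZooKeeper after the reboot; the ids 39, 40 and 41 correspond to the broker.id values configured above:
ls /brokers/ids
The expected output is [39, 40, 41].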
8 Standalone ZooKeeper Installation (Optional)
Official site: http://zookeeper.apache.org/
Download: https://downloads.apache.org/zookeeper/zookeeper-3.6.0/apache-zookeeper-3.6.0-bin.tar.gz
The following steps must be performed on all three machines.
8.1 Extract the Archive
tar -zxvf apache-zookeeper-3.6.0-bin.tar.gz -C /usr/local
8.2 Configure ZooKeeper
cd /usr/local/apache-zookeeper-3.6.0-bin/conf/
mv zoo_sample.cfg zoo.cfg
In zoo.cfg, change the ZooKeeper data directory:
dataDir=/data/zookeeper
Add the entries for the other nodes at the end of the file (note: the lines must not have trailing spaces, or startup will fail):
server.1=kafka-1:12888:13888
server.2=kafka-2:12888:13888
server.3=kafka-3:12888:13888
kafka-1: the hostname of the ZooKeeper node
Port 12888: the port followers use to communicate with the leader
Port 13888: the port used for leader election
8.3 Create the myid File
mkdir /data/zookeeper
echo 1 > /data/zookeeper/myid
Run these commands on all three machines, writing the node's own id into its myid file: write 1 on kafka-1, 2 on kafka-2, and so on.
8.4 Configure Environment Variables
Add the following environment variables to /etc/profile:
#zookeeper environment
export ZOOKEEPER_HOME=/usr/local/apache-zookeeper-3.6.0-bin/
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source /etc/profile
8.5 Start
8.6 Test Start in the Foreground
During debugging it is recommended to start in the foreground, so the startup messages can be seen directly:
/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start-foreground
8.7 Normal Background Start
Once the test start works, use the following command for normal operation:
/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start
8.8 Start on Boot
vim /lib/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target
[Service]
# zkServer.sh start forks into the background, so the forking service type is used
Type=forking
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/jdk1.8.0_231/bin"
User=root
Group=root
ExecStart=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start
ExecStop=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh stop
Restart=on-failure
[Install]
WantedBy=multi-user.target
Reload the systemd configuration:
systemctl daemon-reload
Enable the service at boot:
systemctl enable zookeeper.service
Start ZooKeeper:
systemctl start zookeeper.service
Apply this procedure on all three machines, then reboot them all and confirm the state of each machine after it comes back up.
If kafka-1 acts as the leader after the automatic election and the other two nodes are followers, the startup was successful.
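The role of each node can be checked with the status subcommand; its output contains a Mode: leader or Mode: follower line:
/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh status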