kafka_2.11-0.10.2.0 Cluster Installation

Cluster environment:

spark1 IP: 192.168.6.137
spark2 IP: 192.168.6.138
spark3 IP: 192.168.6.139

1. Download kafka_2.11-0.10.2.0

Download kafka_2.11-0.10.2.0 from an Apache mirror (the wget command in step 2 fetches it from the Tsinghua mirror directly).

2. Extract the archive on spark1, spark2, and spark3:

$ cd /usr/local
$ wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz
$ tar -xzf kafka_2.11-0.10.2.0.tgz
$ cd kafka_2.11-0.10.2.0
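
Steps 4 and 5 below reference $KAFKA_HOME, which nothing above sets. A minimal sketch, assuming the /usr/local install path from this step (add it to each server's ~/.bashrc or equivalent profile):

# Assumed install prefix from the extraction above; adjust if you unpacked elsewhere
$ export KAFKA_HOME=/usr/local/kafka_2.11-0.10.2.0
$ export PATH=$KAFKA_HOME/bin:$PATH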

3. On every server, configure server.properties and zookeeper.properties:

Configure server.properties

spark1

broker.id=1
listeners=PLAINTEXT://192.168.6.137:9092
advertised.listeners=PLAINTEXT://192.168.6.137:9092
log.dirs=/usr/local/kafka_2.11-0.10.2.0/log/kafka
num.partitions=3
zookeeper.connect=192.168.6.137:2181,192.168.6.138:2181,192.168.6.139:2181

spark2

broker.id=2
listeners=PLAINTEXT://192.168.6.138:9092
advertised.listeners=PLAINTEXT://192.168.6.138:9092
log.dirs=/usr/local/kafka_2.11-0.10.2.0/log/kafka
num.partitions=3
zookeeper.connect=192.168.6.137:2181,192.168.6.138:2181,192.168.6.139:2181

spark3

broker.id=3
listeners=PLAINTEXT://192.168.6.139:9092
advertised.listeners=PLAINTEXT://192.168.6.139:9092
log.dirs=/usr/local/kafka_2.11-0.10.2.0/log/kafka
num.partitions=3
zookeeper.connect=192.168.6.137:2181,192.168.6.138:2181,192.168.6.139:2181
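
Before moving on, it is worth confirming that broker.id and the listener address are unique on each host while zookeeper.connect is identical everywhere; duplicate broker IDs keep the second broker from registering. A quick check to run on each server, assuming $KAFKA_HOME as set above:

$ grep -E '^(broker\.id|listeners|advertised\.listeners|zookeeper\.connect)' $KAFKA_HOME/config/server.properties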

Configure zookeeper.properties

spark1, spark2, spark3 (identical on all three)

dataDir=/usr/local/kafka_2.11-0.10.2.0/zookeeper
dataLogDir=/usr/local/kafka_2.11-0.10.2.0/log/zookeeper
clientPort=2181

maxClientCnxns=100
tickTime=2000
initLimit=10
syncLimit=5

server.1=spark1:2888:3888
server.2=spark2:2888:3888
server.3=spark3:2888:3888
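
Two prerequisites for this configuration: the dataDir, dataLogDir, and Kafka log.dirs paths above are not created automatically, and the spark1/spark2/spark3 names in the server.N lines must resolve on every node. A minimal preparation sketch, run as root on each of the three servers (the /etc/hosts step is only needed if the hostnames are not in DNS):

# Create the ZooKeeper and Kafka data/log directories referenced above
$ mkdir -p /usr/local/kafka_2.11-0.10.2.0/zookeeper
$ mkdir -p /usr/local/kafka_2.11-0.10.2.0/log/zookeeper
$ mkdir -p /usr/local/kafka_2.11-0.10.2.0/log/kafka
# Map the cluster hostnames if DNS does not already resolve them
$ cat >> /etc/hosts <<'EOF'
192.168.6.137 spark1
192.168.6.138 spark2
192.168.6.139 spark3
EOF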

On each of the three servers, create a file named myid in the /usr/local/kafka_2.11-0.10.2.0/zookeeper directory (the dataDir configured above). It stores that node's ZooKeeper server ID, matching the N in the server.N lines:
spark1

echo "1" >myid

spark2

echo "2" >myid

spark3

echo "3" >myid

4. Start ZooKeeper on each of the three servers:

$ $KAFKA_HOME/bin/zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
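
Once all three are up, the ensemble should have elected a leader. One way to confirm is ZooKeeper's four-letter commands, which the 3.4.x ZooKeeper bundled with this Kafka release answers by default (assumes nc/netcat is installed):

$ for ip in 192.168.6.137 192.168.6.138 192.168.6.139; do echo -n "$ip "; echo srvr | nc $ip 2181 | grep Mode; done
# One node should report Mode: leader, the other two Mode: follower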

5. Start Kafka on each of the three servers:

$ $KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
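
A quick smoke test of the cluster: create a replicated topic, then produce and consume a message through different brokers. These are the standard command-line tools shipped with this Kafka release; the topic name test is arbitrary:

$ $KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper 192.168.6.137:2181 --replication-factor 3 --partitions 3 --topic test
$ $KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper 192.168.6.137:2181 --topic test
# Type a few lines, then Ctrl-C to exit the producer
$ $KAFKA_HOME/bin/kafka-console-producer.sh --broker-list 192.168.6.137:9092 --topic test
# The consumer should print back what the producer sent
$ $KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server 192.168.6.138:9092 --topic test --from-beginning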

6. Install kafka-manager on spark1:

Install sbt

$ cd /usr/local
$ wget https://dl.bintray.com/sbt/native-packages/sbt/0.13.15/sbt-0.13.15.tgz
$ tar -zxvf sbt-0.13.15.tgz
$ export SBT_HOME=/usr/local/sbt
$ export PATH=$SBT_HOME/bin:$PATH
$ sbt sbt-version    (checks that sbt works; the first run downloads dependencies and takes a while)

Install kafka-manager

$ git clone https://github.com/yahoo/kafka-manager.git
$ cd kafka-manager/
$ sbt clean dist    (the build takes quite a long time)
$ cd target/universal/    (the build produces kafka-manager-1.3.3.7.zip in this directory)
$ cd /usr/local
$ mkdir kafka-manager-web
$ cp /usr/local/kafka-manager/target/universal/kafka-manager-1.3.3.7.zip kafka-manager-web
$ cd kafka-manager-web
$ unzip kafka-manager-1.3.3.7.zip
$ rm -f kafka-manager-1.3.3.7.zip    (the zip is no longer needed once extracted)
$ cd kafka-manager-1.3.3.7/conf/
$ vi application.conf
Only one line needs to change: add the ZooKeeper addresses of the three servers to the configuration:
kafka-manager.zkhosts="192.168.6.137:2181,192.168.6.138:2181,192.168.6.139:2181"

Start kafka-manager

$ cd /usr/local/kafka-manager-web/kafka-manager-1.3.3.7
$ ./bin/kafka-manager -Dconfig.file=conf/application.conf -Dapplication.home=/usr/local/kafka-manager-web/kafka-manager-1.3.3.7 -Dhttp.port=8888 > /dev/null &

Open http://192.168.6.137:8888 in a browser to reach the kafka-manager UI.
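
kafka-manager is a Play application, so it records its process ID in a RUNNING_PID file under the application directory; to stop or restart it, kill that PID rather than relying on the backgrounded shell job. A sketch, assuming the paths used above:

$ kill $(cat /usr/local/kafka-manager-web/kafka-manager-1.3.3.7/RUNNING_PID)
# If the process died uncleanly, remove the stale file or the next start will refuse to run
$ rm -f /usr/local/kafka-manager-web/kafka-manager-1.3.3.7/RUNNING_PID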
