Configuring SASL Authentication for Kafka
1. Add the following lines to server.properties to enable SASL authentication:
# cat /usr/local/kafka/config/server.properties
listeners=SASL_PLAINTEXT://192.168.1.8:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
2. Create the server-side JAAS file. It must not contain anything extra, including comments, and the last two lines must end with semicolons:
# cat /usr/local/kafka/config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret"
user_kafka="kafka#secret"
user_alice="alice#secret";
};
Explanation: the file is named kafka_server_jaas.conf and placed in /usr/local/kafka/config.
Entries of the form user_<name> define the accounts that client programs (producers and consumers) authenticate with; you can define as many as you need.
The example above defines two users, kafka and alice; the value after the equals sign is that user's password (for example, user_kafka defines a user named kafka with the password kafka#secret).
These users can also be referenced later when defining per-user ACLs.
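As an aside, recent Kafka versions also accept the broker's JAAS entry inline in server.properties through a listener-scoped property, which avoids the separate file and the KAFKA_OPTS edit in step 4. A sketch for the SASL_PLAINTEXT listener above (same users and passwords as the file; verify support in your Kafka version before relying on it):

```properties
listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="kafka" \
    password="kafka#secret" \
    user_kafka="kafka#secret" \
    user_alice="alice#secret";
```

Note the trailing backslashes: in a Java properties file they continue the value onto the next line, and the whole entry must still end with a semicolon.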
3. Create the client JAAS file, which the console producer and consumer tests below will use. Again, it must not contain anything extra, including comments:
# cat /usr/local/kafka/config/kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret";
};
4. Add the server JAAS file path to kafka-server-start.sh. Kafka loads this file at startup, and any application that wants to write to Kafka must first present a matching username and password:
# cat /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
5. Add the client JAAS file path to kafka-console-producer.sh, used when starting the test producer below:
# cat /usr/local/kafka/bin/kafka-console-producer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
6. Add the client JAAS file path to kafka-console-consumer.sh, used when starting the test consumer below:
# cat /usr/local/kafka/bin/kafka-console-consumer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
7. Edit /usr/local/kafka/config/producer.properties and append the following two lines at the end:
# cat /usr/local/kafka/config/producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
8. Edit /usr/local/kafka/config/consumer.properties and append the same two lines as for the producer:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
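Newer Kafka clients can also take the login module inline via the sasl.jaas.config property, which removes the need for the client JAAS file and the KAFKA_OPTS edits in steps 3, 5 and 6. A sketch using this tutorial's credentials, added to the same producer.properties and consumer.properties (check that your client version supports this property):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka#secret";
```

As with the JAAS files, the entry must end with a semicolon.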
9. Start Kafka and watch logs/server.log for errors:
bin/kafka-server-start.sh -daemon config/server.properties
If Kafka fails to start, the cause is almost always one of the files above: either a wrong JAAS file path configured in a script, or a problem inside a JAAS file itself (a missing semicolon on one of the last two lines, or a stray comment).
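Since stray comments and missing semicolons are the most common mistakes, a quick sanity check can be run before starting the broker. check_jaas below is a hypothetical helper (not part of Kafka) that only understands the simple one-entry layout used in this tutorial:

```shell
# check_jaas: fail if a JAAS file contains comment lines, or if the last
# option line before "};" does not end with a semicolon.
check_jaas() {
  f="$1"
  if grep -Eq '^[[:space:]]*(#|//)' "$f"; then
    echo "ERROR: comments are not allowed in $f"
    return 1
  fi
  # last non-empty line that is not the closing "};"
  last_opt=$(grep -v '};' "$f" | grep -v '^[[:space:]]*$' | tail -n 1)
  case "$last_opt" in
    *\;) echo "OK: $f" ;;
    *)   echo "ERROR: missing trailing semicolon in $f"; return 1 ;;
  esac
}

# Usage:
# check_jaas /usr/local/kafka/config/kafka_server_jaas.conf
```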
10. Start a producer:
kafka-console-producer.sh --broker-list 192.168.1.8:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
11. Start a consumer:
kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
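Instead of repeating the SASL settings with --producer-property and --consumer-property, the console tools can also read them from the properties files edited in steps 7 and 8 (the KAFKA_OPTS lines from steps 5 and 6 still supply the credentials):

```shell
kafka-console-producer.sh --broker-list 192.168.1.8:9092 --topic test \
  --producer.config /usr/local/kafka/config/producer.properties
kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --from-beginning \
  --consumer.config /usr/local/kafka/config/consumer.properties
```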
12. Configure Kafka authentication in metricbeat:
output.kafka:
  hosts: ["192.168.1.8:9092", "192.168.1.9:9092", "192.168.1.10:9092"]
  topic: 'metricbeat_95598'
  username: kafka
  password: kafka#secret
After starting the metricbeat service, its log shows lines like the following, indicating that authentication passed and the connection to Kafka succeeded:
2020-08-12T15:49:47.032+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.034+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.035+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.040+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.8:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.040+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.8:9092 (registered as #0)
2020-08-12T15:49:47.041+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.9:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.041+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.9:9092 (registered as #1)
2020-08-12T15:49:47.049+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.10:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.049+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.10:9092 (registered as #2)
Then start another console consumer to print messages to the terminal, and you will see the messages produced by metricbeat appear on screen.