Installing Kafka on Linux (Single-Node)
- Set up and start ZooKeeper in advance (a quick port check is shown below)
This is just for introductory learning, so a single-node setup is used here.
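Before going further, it helps to confirm ZooKeeper is actually listening; a minimal check, assuming ZooKeeper runs on its default port 2181 (this command is not part of the original article):
ss -lnt | grep 2181
If a LISTEN entry for port 2181 appears, ZooKeeper is reachable.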
- Download and extract the package
wget -P /usr/local https://mirror.bit.edu.cn/apache/kafka/2.4.0/kafka_2.11-2.4.0.tgz
You can check the official site for the available release versions.
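The wget command above only downloads the archive; a typical extraction step, assuming it was saved under /usr/local as above, would be:
cd /usr/local
tar -zxvf kafka_2.11-2.4.0.tgz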
- Create a log directory
Kafka's default log directory is under /tmp, which the system tends to clean out, so it is not a good place to keep these logs; create a dedicated directory and point the configuration at it.
cd kafka_2.11-2.4.0
mkdir logs
[root@VM_0_15_centos kafka_2.11-2.4.0]# ll
total 76
drwxr-xr-x 3 root root 4096 Dec 10 00:51 bin
drwxr-xr-x 2 root root 4096 Mar 18 23:22 config
-rw-rw-r-- 1 root root 19356 Mar 18 23:25 hs_err_pid12417.log
drwxr-xr-x 2 root root 4096 Mar 18 23:15 libs
-rw-r--r-- 1 root root 32216 Dec 10 00:46 LICENSE
drwxrwxr-x 2 root root 4096 Mar 18 23:25 logs
-rw-r--r-- 1 root root 337 Dec 10 00:46 NOTICE
drwxr-xr-x 2 root root 4096 Dec 10 00:51 site-docs
- Modify the configuration
The default configuration file is config/server.properties under the installation directory.
Mainly modify and confirm the following values:
broker.id=0 # unique ID for each Kafka broker (must be an integer)
port=9092 # listening port
host.name=127.0.0.1 # server IP address; change this to your own server's IP
log.dirs=/usr/local/kafka_2.11-2.4.0/logs # log storage path, the directory created above
zookeeper.connect=localhost:2181 # ZooKeeper address and port; start ZooKeeper in advance so you can later test whether Kafka was set up successfully
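A quick way to double-check these values after editing (just a convenience command, not part of the original steps):
grep -E '^(broker\.id|port|host\.name|log\.dirs|zookeeper\.connect)' config/server.properties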
- Start the broker
Use bin/kafka-server-start.sh under the installation directory.
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-server-start.sh config/server.properties
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/kafka_2.11-2.4.0/hs_err_pid15276.log
Clearly the broker failed to start; the message essentially means there is not enough memory.
Since the environment is resource-constrained, the only option is to reduce the amount of memory the JVM asks for, i.e. lower the heap limits set in the startup script under the bin directory.
- bin/kafka-server-start.sh (search for the keyword KAFKA_HEAP_OPTS)
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
Shrink Xms1G down to Xms128M.
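After the edit (only -Xms changed, as described above), the block reads:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms128M"
fi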
Then try starting again:
[2020-03-18 23:54:56,629] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-03-18 23:54:56,680] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-03-18 23:54:56,682] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-03-18 23:54:56,682] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
[2020-03-18 23:54:56,682] INFO Kafka startTimeMs: 1584546896680 (org.apache.kafka.common.utils.AppInfoParser)
[2020-03-18 23:54:56,683] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
The last line, [KafkaServer id=0] started, shows that the broker started successfully.
If you want Kafka to run in the background, just append an & to the startup command:
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-server-start.sh config/server.properties &
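A common variant (my own suggestion, not from the original article) also detaches the process from the terminal and captures its console output:
nohup bin/kafka-server-start.sh config/server.properties > kafka-console.log 2>&1 &
Alternatively, kafka-server-start.sh accepts a -daemon flag: bin/kafka-server-start.sh -daemon config/server.properties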
- Common commands
- Stop
bin/kafka-server-stop.sh
- Create a topic (here: a topic named news with 1 replica and 1 partition):
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --topic news --replication-factor 1 --partitions 1
Created topic news.
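To confirm how the topic was created, kafka-topics.sh also offers --describe (see the option table below); for example, against the same ZooKeeper address:
bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic news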
The full list of options the tool accepts:
Option Description
------ -----------
--alter Alter the number of partitions,
replica assignment, and/or
configuration for the topic.
--at-min-isr-partitions if set when describing topics, only
show partitions whose isr count is
equal to the configured minimum. Not
supported with the --zookeeper
option.
--bootstrap-server <String: server to REQUIRED: The Kafka server to connect
connect to> to. In case of providing this, a
direct Zookeeper connection won't be
required.
--command-config <String: command Property file containing configs to be
config property file> passed to Admin Client. This is used
only with --bootstrap-server option
for describing and altering broker
configs.
--config <String: name=value> A topic configuration override for the
topic being created or altered.The
following is a list of valid
configurations:
cleanup.policy
compression.type
delete.retention.ms
file.delete.delay.ms
flush.messages
flush.ms
follower.replication.throttled.
replicas
index.interval.bytes
leader.replication.throttled.replicas
max.compaction.lag.ms
max.message.bytes
message.downconversion.enable
message.format.version
message.timestamp.difference.max.ms
message.timestamp.type
min.cleanable.dirty.ratio
min.compaction.lag.ms
min.insync.replicas
preallocate
retention.bytes
retention.ms
segment.bytes
segment.index.bytes
segment.jitter.ms
segment.ms
unclean.leader.election.enable
See the Kafka documentation for full
details on the topic configs.It is
supported only in combination with --
create if --bootstrap-server option
is used.
--create Create a new topic.
--delete Delete a topic
--delete-config <String: name> A topic configuration override to be
removed for an existing topic (see
the list of configurations under the
--config option). Not supported with
the --bootstrap-server option.
--describe List details for the given topics.
--disable-rack-aware Disable rack aware replica assignment
--exclude-internal exclude internal topics when running
list or describe command. The
internal topics will be listed by
default
--force Suppress console prompts
--help Print usage information.
--if-exists if set when altering or deleting or
describing topics, the action will
only execute if the topic exists.
Not supported with the --bootstrap-
server option.
--if-not-exists if set when creating topics, the
action will only execute if the
topic does not already exist. Not
supported with the --bootstrap-
server option.
--list List all available topics.
--partitions <Integer: # of partitions> The number of partitions for the topic
being created or altered (WARNING:
If partitions are increased for a
topic that has a key, the partition
logic or ordering of the messages
will be affected). If not supplied
for create, defaults to the cluster
default.
--replica-assignment <String: A list of manual partition-to-broker
broker_id_for_part1_replica1 : assignments for the topic being
broker_id_for_part1_replica2 , created or altered.
broker_id_for_part2_replica1 :
broker_id_for_part2_replica2 , ...>
--replication-factor <Integer: The replication factor for each
replication factor> partition in the topic being
created. If not supplied, defaults
to the cluster default.
--topic <String: topic> The topic to create, alter, describe
or delete. It also accepts a regular
expression, except for --create
option. Put topic name in double
quotes and use the '\' prefix to
escape regular expression symbols; e.
g. "test\.topic".
--topics-with-overrides if set when describing topics, only
show topics that have overridden
configs
--unavailable-partitions if set when describing topics, only
show partitions whose leader is not
available
--under-min-isr-partitions if set when describing topics, only
show partitions whose isr count is
less than the configured minimum.
Not supported with the --zookeeper
option.
--under-replicated-partitions if set when describing topics, only
show under replicated partitions
--version Display Kafka version.
--zookeeper <String: hosts> DEPRECATED, The connection string for
the zookeeper connection in the form
host:port. Multiple hosts can be
given to allow fail-over.
- List all topics
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
news
- Start a producer and send messages
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic news
>hello
>^C
[root@VM_0_15_centos kafka_2.11-2.4.0]#
When the > prompt appears, the producer is waiting for you to type messages to send.
Press Ctrl + C to stop entering messages when you are done.
- Start a consumer and receive messages
[root@VM_0_15_centos kafka_2.11-2.4.0]# bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic news --from-beginning
Note that the consumer's --bootstrap-server must point at the Kafka broker (port 9092), not at ZooKeeper's 2181.
Each of the commands above has many more options; mistype one on the command line and press Enter, and the full option list is printed.