Making Flume events sent to Kafka spread evenly across partitions

The official documentation says that events without a key are assigned to partitions at random, but for some reason that was not the behaviour I saw here: the events did not get spread across the partitions. (A likely cause is that this sink still uses the old Kafka producer, which, for keyless messages, picks one random partition and sticks with it until the next topic metadata refresh.) So I add a key to every event, which means adding a source interceptor.
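Independently of Flume, the key-to-partition hashing can be sanity-checked by sending a few hand-keyed messages with the console producer. This is a minimal sketch, assuming the broker list and topic from the config below; parse.key and key.separator are standard console-producer properties, and on recent Kafka releases the flag is --bootstrap-server instead of --broker-list:

# Each input line is key:value; with a key present the producer hashes the key to pick the partition
kafka-console-producer.sh --broker-list node1:9092,node2:9092,node3:9092 --topic test \
  --property parse.key=true --property key.separator=:
>k1:first test line
>k2:second test line
>k3:third test line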

Run the agent with: flume-ng agent --conf conf --conf-file test.sh --name a1 -Dflume.root.logger=INFO,console

 

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/access.log

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test
a1.sinks.k1.brokerList = node1:9092,node2:9092,node3:9092
a1.sinks.k1.metadata.broker.list = node1:9092,node2:9092,node3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Source interceptor: stamp every event with a UUID in the "key" header,
# which the Kafka sink uses as the message key
a1.sources.r1.interceptors = i2
a1.sources.r1.interceptors.i2.type = org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.r1.interceptors.i2.headerName = key
a1.sources.r1.interceptors.i2.preserveExisting = false

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
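With the interceptor in place and the agent restarted, the distribution can be verified by comparing the latest offset of every partition of the topic. This is a minimal sketch using Kafka's bundled GetOffsetShell tool, with the broker list and topic taken from the config above:

# Prints one line per partition in the form test:<partition>:<latest offset>;
# roughly equal offsets across partitions mean the UUID key is spreading the events
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list node1:9092,node2:9092,node3:9092 --topic test --time -1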