E-commerce Data Warehouse (Project Experience with Flume Components, Log Collection Flume Configuration, and the Flume ETL and Log-Type Interceptors)
Cluster Planning
Project Experience: Flume Components
1. Source
(1) Advantages of Taildir Source over Exec Source and Spooling Directory Source:
Taildir Source: supports breakpoint resume and multiple directories. In Flume 1.6 and earlier, you had to write a custom Source that recorded the read position of each file in order to get breakpoint resume (see the position-file sketch right after this list).
Breakpoint resume means picking up from the point where reading stopped. For example, when reading records 1 to 100, if the read breaks at 40, the next run starts again from 40.
Exec Source: can collect data in real time, but if Flume is not running or the shell command fails, data will be lost.
Spooling Directory Source: monitors a directory, but does not support breakpoint resume.
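For a concrete picture of how the breakpoint is recorded: the position file (configured later in this article as /export/servers/flume/log_position.json) is plain JSON maintained by the Taildir Source; the values here are only illustrative, not taken from a real run:
[{"inode":2496272,"pos":12,"file":"/tmp/logs/app-2020-07-01.log"}]
On restart, the source matches each monitored file by inode and continues reading from the recorded pos instead of from offset 0.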
(2) How should batchSize be set?
Answer: when each Event is around 1 KB, 500 to 1000 is appropriate (the default is 100).
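If you want to raise it, the batch size is set directly on the source; a minimal sketch (not part of the final configuration shown below):
a1.sources.r1.batchSize = 500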
2. Channel
Kafka Channel is used, which removes the need for a Sink and improves efficiency. Kafka Channel stores its data in Kafka, so the data is persisted on disk.
Note: before Flume 1.7, Kafka Channel was rarely used, because the parseAsFlumeEvent setting did not take effect: whether it was set to true or false, the data was always converted into a Flume Event, so the Flume headers were written into the Kafka message together with the body. That is not what we need here; we only want the body to be written.
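Because parseAsFlumeEvent is set to false later in this article, the Kafka messages contain only the raw log line, which can be verified with a plain Kafka console consumer; a hedged example, assuming the same brokers as in the configuration below:
bin/kafka-console-consumer.sh --bootstrap-server hadoop12:9092 --topic topic_start --from-beginning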
Log Collection Flume Configuration
1. Flume configuration analysis
Flume reads the log files directly; the log file names follow the pattern app-yyyy-mm-dd.log.
2. (1) Create the file file-flume-kafka.conf in the /export/servers/flume/conf directory:
vim file-flume-kafka.conf
a1.sources = r1
a1.channels = c1 c2
# configure source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /export/servers/flume/log_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /tmp/logs/app.+
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1 c2
#interceptor
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = com.LogETLInterceptor$Builder
a1.sources.r1.interceptors.i2.type = com.LogTypeInterceptor$Builder
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = topic
a1.sources.r1.selector.mapping.topic_start = c1
a1.sources.r1.selector.mapping.topic_event = c2
# configure channel
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop12:9092,hadoop13:9092,hadoop14:9092
a1.channels.c1.kafka.topic = topic_start
a1.channels.c1.parseAsFlumeEvent = false
a1.channels.c1.kafka.consumer.group.id = flume-consumer
a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.kafka.bootstrap.servers = hadoop12:9092,hadoop13:9092,hadoop14:9092
a1.channels.c2.kafka.topic = topic_event
a1.channels.c2.parseAsFlumeEvent = false
a1.channels.c2.kafka.consumer.group.id = flume-consumer
Note: com.LogETLInterceptor$Builder and com.LogTypeInterceptor$Builder are the fully qualified class names of the custom interceptors; adjust them to match your own interceptor classes.
Flume ETL and Log-Type Interceptors
1. Implement the Interceptor interface.
2. Override its four methods (initialize, single-Event intercept, multi-Event intercept, close).
From an Event you can get both the body and the headers.
ETL interceptor => body => mainly checks that the JSON data starts with { and ends with }, and that the server timestamp is 13 characters long and all digits.
Log-type interceptor => body and headers => determines the type from the body and adds the corresponding topic to the headers: topic_start / topic_event.
3. Provide a static inner class Builder that creates (news up) the interceptor object.
4. Package the jar and upload it to the cluster.
This project defines two custom interceptors: an ETL interceptor and a log-type interceptor.
The ETL interceptor filters out logs with invalid timestamps or incomplete JSON data.
The log-type interceptor separates startup logs from event logs so they can be sent to different Kafka topics.
(1) Create a Maven project named flume-interceptor.
(2) Create the package: com.
(3) Add the following configuration to pom.xml:
<dependencies>
<dependency>
<groupId>org.apache.flume</groupId>
<artifactId>flume-ng-core</artifactId>
<version>1.7.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Create the LogETLInterceptor class in the com package; this is the Flume ETL interceptor LogETLInterceptor:
package com;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
public class LogETLInterceptor implements Interceptor {
@Override
public void initialize() {
}
@Override
public Event intercept(Event event) {
// ETL data cleaning: "{...}" is valid; "{xxx" (unclosed JSON) is dirty data
// 1. Get the log line
byte[] body = event.getBody();
String log = new String(body, Charset.forName("UTF-8"));
// 2. Distinguish the log type
if (log.contains("start")) {
// Validation logic for startup logs
if (LogUtils.validateStart(log)) {
return event;
}
} else {
// Validation logic for event logs
if (LogUtils.validateEvent(log)) {
return event;
}
}
return null;
}
@Override
public List<Event> intercept(List<Event> events) {
// Handle a batch of events
ArrayList<Event> interceptors = new ArrayList<>();
// Keep and return only the events that pass validation
for (Event event : events) {
Event intercept1 = intercept(event);
if (intercept1 != null){
interceptors.add(intercept1);
}
}
return interceptors;
}

@Override
public void close() {
}
public static class Builder implements Interceptor.Builder{
@Override
public Interceptor build() {
return new LogETLInterceptor();
}
@Override
public void configure(Context context) {
}
}
}
The Flume log validation utility class LogUtils:
package com;

import org.apache.commons.lang.math.NumberUtils;
public class LogUtils {
// Validate a startup log
public static boolean validateStart(String log) {
//{xxxxxxxx}
if (log == null){
return false;
}
// Check that the data starts with { and ends with }
if (!log.trim().startsWith("{") || !log.trim().endsWith("}")){
return false;
}
return true;
}
// Validate an event log
public static boolean validateEvent(String log) {
// server timestamp | log content
if (log == null){
return false;
}
// Split on the | separator
String[] logContents = log.split("\\|");
if (logContents.length != 2){
return false;
}
// Validate the server timestamp (must be exactly 13 characters and all digits)
if (logContents[0].length() != 13 || !NumberUtils.isDigits(logContents[0])){
return false;
}
// Validate the log body format
if (!logContents[1].trim().startsWith("{") || !logContents[1].trim().endsWith("}")){
return false;
}
return true;
}
}
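With LogUtils in place, a quick local check of the ETL interceptor can be written as a small main method. This is only a sketch; the sample log lines (the "en":"ad" JSON) are hypothetical and not taken from the project's real logs:
package com;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;

public class LogETLInterceptorDemo {
    public static void main(String[] args) {
        Interceptor etl = new LogETLInterceptor.Builder().build();
        etl.initialize();

        // A well-formed event log: 13-digit server timestamp | complete JSON body
        Event ok = EventBuilder.withBody("1596382800000|{\"en\":\"ad\"}", StandardCharsets.UTF_8);
        // A malformed event log: the JSON body is not closed, so it should be dropped
        Event bad = EventBuilder.withBody("1596382800000|{\"en\":\"ad\"", StandardCharsets.UTF_8);

        System.out.println(etl.intercept(ok) != null);   // true  -> kept
        System.out.println(etl.intercept(bad) != null);  // false -> filtered out

        etl.close();
    }
}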
The Flume log-type interceptor LogTypeInterceptor:
package com;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class LogTypeInterceptor implements Interceptor {
@Override
public void initialize() {
}
@Override
public Event intercept(Event event) {
// Distinguish the log type: start vs. event
// Both the body and the headers are used
// Get the body
byte[] body = event.getBody();
String log = new String(body, Charset.forName("UTF-8"));
// Get the headers
Map<String, String> headers = event.getHeaders();
// Business logic: decide the target topic from the body
if (log.contains("start")){
headers.put("topic","topic_start");
}else {
headers.put("topic","topic_event");
}
return event;
}
@Override
public List<Event> intercept(List<Event> events) {
ArrayList<Event> interceptors = new ArrayList<>();
for (Event event : events) {
Event intercept1 = intercept(event);
interceptors.add(intercept1);
}
return interceptors;
}
@Override
public void close() {
}
public static class Builder implements Interceptor.Builder{
@Override
public Interceptor build() {
return new LogTypeInterceptor();
}
@Override
public void configure(Context context) {
}
}
}
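A similar sketch for the type interceptor, again with a hypothetical startup log line, shows how the topic header is filled in; this is the header the multiplexing selector in file-flume-kafka.conf uses to route events to c1 or c2:
package com;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;

public class LogTypeInterceptorDemo {
    public static void main(String[] args) {
        Interceptor typer = new LogTypeInterceptor.Builder().build();
        typer.initialize();

        // Startup logs are recognized by the "start" marker in the body
        Event e = EventBuilder.withBody("{\"en\":\"start\"}", StandardCharsets.UTF_8);
        Event out = typer.intercept(e);

        // Prints topic_start; an event log body would get topic_event instead
        System.out.println(out.getHeaders().get("topic"));

        typer.close();
    }
}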
Packaging
After the interceptor is built, only the standalone jar is needed; the dependency jars do not have to be uploaded. The jar must be placed in Flume's lib directory.
1. First, put the built jar into the /export/servers/flume/lib directory on hadoop12.
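A hedged sketch of the build-and-copy step; the jar name depends on the artifactId and version in your pom.xml, so flume-interceptor-1.0-SNAPSHOT.jar below is only a placeholder:
mvn clean package
# copy the plain jar (not the jar-with-dependencies) into Flume's lib directory on hadoop12
cp target/flume-interceptor-1.0-SNAPSHOT.jar /export/servers/flume/lib/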
Distribute Flume to hadoop13 and hadoop14:
xsync flume/
bin/flume-ng agent --name a1 --conf-file conf/file-flume-kafka.conf &
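If you want to watch the agent's log output while verifying the pipeline, a common variant (an assumption, not required by the project) is to also pass the conf directory and log to the console:
bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/file-flume-kafka.conf -Dflume.root.logger=INFO,console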