Apache Flink Quickstart in Practice
Quickstart
Setup: Download and Start Flink
Flink runs on Linux, Mac OS X, and Windows and only requires Java 7 or later. Windows users should refer to the Flink on Windows guide.
You can check your currently installed Java version with the following command:
java -version
If you have Java 8 installed, the output should look similar to this:
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
Download and Build
Clone the source code from the repository:
$ git clone https://github.com/apache/flink.git
$ cd flink
$ mvn clean package -DskipTests # this will take up to 10 minutes
$ cd build-target # this is where Flink is installed to
Start a Local Flink Cluster
$ ./bin/start-local.sh # Start Flink
Check the JobManager's web frontend at http://localhost:8081 and make sure all components are up and running.
You can also check the log files to verify that the system is running:
$ tail log/flink-*-jobmanager-*.log
INFO ... - Starting JobManager
INFO ... - Starting JobManager web frontend
INFO ... - Web frontend listening at 127.0.0.1:8081
INFO ... - Registered TaskManager at 127.0.0.1 (akka://flink/user/taskmanager)
The Code
You can find the following SocketWindowWordCount example code on GitHub. It is written in Java:
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // Data type for words with count. It is a POJO (public fields and a public
    // no-argument constructor), which is why keyBy("word") can reference the field by name.
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}
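As a side note, the flatMap and reduce functions above can also be written as Java 8 lambdas. The following is only a minimal sketch, assuming the same Flink 1.x DataStream API as the listing above; the returns() type hint is needed because the lambda's generic output type is erased at compile time:

// Same pipeline as above, expressed with lambdas (sketch, not part of the original example)
DataStream<WordWithCount> windowCounts = text
    .flatMap((String value, Collector<WordWithCount> out) -> {
        for (String word : value.split("\\s")) {
            out.collect(new WordWithCount(word, 1L));
        }
    })
    .returns(WordWithCount.class) // type hint, since lambda generics are erased
    .keyBy("word")
    .timeWindow(Time.seconds(5), Time.seconds(1))
    .reduce((a, b) -> new WordWithCount(a.word, a.count + b.count));

The behavior is identical to the anonymous-class version; which form to use is purely a readability choice.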
Run the Example
Now we can run this Flink application. It reads text from a socket and prints, every 5 seconds, the number of occurrences of each distinct word seen during the previous 5 seconds.
- First, use netcat to start a local server:
$ nc -l 9000
- Submit the Flink program:
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
Using address 127.0.0.1:6123 to connect to JobManager.
JobManager web interface address http://127.0.0.1:8081
Starting execution of program
Submitting job with JobID: 574a10c8debda3dccd0c78a3bde55e1b. Waiting for job completion.
Connected to JobManager at Actor[akka.tcp://flink@127.0.0.1:6123/user/jobmanager#297388688]
11/04/2016 14:04:50 Job execution switched to status RUNNING.
11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to SCHEDULED
11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to DEPLOYING
11/04/2016 14:04:50 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to SCHEDULED
11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to DEPLOYING
11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to RUNNING
11/04/2016 14:04:51 Source: Socket Stream -> Flat Map(1/1) switched to RUNNING
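The log above mentions TumblingProcessingTimeWindows(5000): the packaged example jar groups words into 5-second tumbling windows, while the listing earlier in this article uses a 5-second window that slides every second. A minimal sketch of the two variants, assuming the same Flink 1.x DataStream API as above:

// Tumbling processing-time window: each element belongs to exactly one window,
// so one result per word is emitted every 5 seconds (what the packaged jar does).
.keyBy("word")
.timeWindow(Time.seconds(5))

// Sliding processing-time window: windows overlap, so a result covering the
// last 5 seconds is emitted every second (what the listing above does).
.keyBy("word")
.timeWindow(Time.seconds(5), Time.seconds(1))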
The program connects to the socket and waits for input. You can check the web frontend to verify that the job is running as expected.
The word counts are computed every 5 seconds and printed to stdout. Monitor the JobManager's output file (.out) and type some words into nc:
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye
$ tail -f log/flink-*-jobmanager-*.out
lorem : 1
bye : 1
ipsum : 4
- Once you are done, stop Flink:
$ ./bin/stop-local.sh