Deploying Apache Flink in Local Mode

Apache Flink supports several deployment modes. This article covers the local deployment mode, which is mainly intended for developers to debug and test their programs.

Prerequisites

  • Operating system: there are no special requirements; Linux, macOS, and Windows all work.
  • Java 1.8.x or later; Apache Flink does not support versions below 1.8.x.

Check the system's Java version

$ java -version

java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
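
If more than one JDK is installed, Flink's startup scripts should pick up the JVM from JAVA_HOME when it is set. A minimal sketch, assuming an OpenJDK 8 install path (the path below is only an example; adjust it to your system):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path, not from the original post
export PATH=$JAVA_HOME/bin:$PATH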

Download and Start Flink

Download

Go to the Apache Flink website and download the release package from https://flink.apache.org/downloads.html.
Here we download the latest release at the time of writing, version 1.6.2:

$ wget http://mirror.metrocast.net/apache/flink/flink-1.6.2/flink-1.6.2-bin-scala_2.11.tgz
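
Optionally verify the download against the SHA-512 checksum published on the Apache release page (the expected value is not reproduced here; compare it yourself):

$ shasum -a 512 flink-1.6.2-bin-scala_2.11.tgz   # or: sha512sum flink-1.6.2-bin-scala_2.11.tgz on most Linux distributions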

Extract

$ tar -zxvf flink-1.6.2-bin-scala_2.11.tgz
x flink-1.6.2/
x flink-1.6.2/bin/
x flink-1.6.2/bin/config.sh
x flink-1.6.2/bin/flink
x flink-1.6.2/bin/flink-console.sh
x flink-1.6.2/bin/flink-daemon.sh
x flink-1.6.2/bin/flink.bat
x flink-1.6.2/bin/historyserver.sh
x flink-1.6.2/bin/jobmanager.sh
x flink-1.6.2/bin/mesos-appmaster-job.sh
x flink-1.6.2/bin/mesos-appmaster.sh
x flink-1.6.2/bin/mesos-taskmanager.sh
x flink-1.6.2/bin/pyflink-stream.sh
x flink-1.6.2/bin/pyflink.bat
x flink-1.6.2/bin/pyflink.sh
x flink-1.6.2/bin/sql-client.sh
x flink-1.6.2/bin/standalone-job.sh
x flink-1.6.2/bin/start-cluster.bat
x flink-1.6.2/bin/start-cluster.sh
x flink-1.6.2/bin/start-scala-shell.sh
x flink-1.6.2/bin/start-zookeeper-quorum.sh
x flink-1.6.2/bin/stop-cluster.sh
x flink-1.6.2/bin/stop-zookeeper-quorum.sh
x flink-1.6.2/bin/taskmanager.sh
x flink-1.6.2/bin/yarn-session.sh
x flink-1.6.2/bin/zookeeper.sh
x flink-1.6.2/conf/
x flink-1.6.2/conf/flink-conf.yaml
x flink-1.6.2/conf/log4j-cli.properties
x flink-1.6.2/conf/log4j-console.properties
x flink-1.6.2/conf/log4j-yarn-session.properties
x flink-1.6.2/conf/log4j.properties
x flink-1.6.2/conf/logback-console.xml
x flink-1.6.2/conf/logback-yarn.xml
x flink-1.6.2/conf/logback.xml
x flink-1.6.2/conf/masters
x flink-1.6.2/conf/slaves
x flink-1.6.2/conf/sql-client-defaults.yaml
x flink-1.6.2/conf/zoo.cfg
x flink-1.6.2/examples/
x flink-1.6.2/examples/batch/
x flink-1.6.2/examples/batch/ConnectedComponents.jar
x flink-1.6.2/examples/batch/DistCp.jar
x flink-1.6.2/examples/batch/EnumTriangles.jar
x flink-1.6.2/examples/batch/KMeans.jar
x flink-1.6.2/examples/batch/PageRank.jar
x flink-1.6.2/examples/batch/TransitiveClosure.jar
x flink-1.6.2/examples/batch/WebLogAnalysis.jar
x flink-1.6.2/examples/batch/WordCount.jar
x flink-1.6.2/examples/gelly/
x flink-1.6.2/examples/gelly/flink-gelly-examples_2.11-1.6.2.jar
x flink-1.6.2/examples/python/
x flink-1.6.2/examples/python/batch/
x flink-1.6.2/examples/python/batch/TPCHQuery10.py
x flink-1.6.2/examples/python/batch/TPCHQuery3.py
x flink-1.6.2/examples/python/batch/TriangleEnumeration.py
x flink-1.6.2/examples/python/batch/WebLogAnalysis.py
x flink-1.6.2/examples/python/batch/WordCount.py
x flink-1.6.2/examples/python/batch/__init__.py
x flink-1.6.2/examples/python/streaming/
x flink-1.6.2/examples/python/streaming/fibonacci.py
x flink-1.6.2/examples/python/streaming/word_count.py
x flink-1.6.2/examples/streaming/
x flink-1.6.2/examples/streaming/IncrementalLearning.jar
x flink-1.6.2/examples/streaming/Iteration.jar
x flink-1.6.2/examples/streaming/Kafka010Example.jar
x flink-1.6.2/examples/streaming/SessionWindowing.jar
x flink-1.6.2/examples/streaming/SocketWindowWordCount.jar
x flink-1.6.2/examples/streaming/StateMachineExample.jar
x flink-1.6.2/examples/streaming/TopSpeedWindowing.jar
x flink-1.6.2/examples/streaming/Twitter.jar
x flink-1.6.2/examples/streaming/WindowJoin.jar
x flink-1.6.2/examples/streaming/WordCount.jar
x flink-1.6.2/lib/
x flink-1.6.2/lib/flink-dist_2.11-1.6.2.jar
x flink-1.6.2/lib/flink-python_2.11-1.6.2.jar
x flink-1.6.2/lib/log4j-1.2.17.jar
x flink-1.6.2/lib/slf4j-log4j12-1.7.7.jar
x flink-1.6.2/LICENSE
x flink-1.6.2/log/
x flink-1.6.2/NOTICE
x flink-1.6.2/opt/
x flink-1.6.2/opt/flink-cep-scala_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-cep_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-gelly-scala_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-gelly_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-datadog-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-dropwizard-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-ganglia-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-graphite-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-prometheus-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-slf4j-1.6.2.jar
x flink-1.6.2/opt/flink-metrics-statsd-1.6.2.jar
x flink-1.6.2/opt/flink-ml_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-queryable-state-runtime_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-s3-fs-hadoop-1.6.2.jar
x flink-1.6.2/opt/flink-s3-fs-presto-1.6.2.jar
x flink-1.6.2/opt/flink-sql-client-1.6.2.jar
x flink-1.6.2/opt/flink-streaming-python_2.11-1.6.2.jar
x flink-1.6.2/opt/flink-swift-fs-hadoop-1.6.2.jar
x flink-1.6.2/opt/flink-table_2.11-1.6.2.jar
x flink-1.6.2/README.txt
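
The environment variables configured in the next section assume the distribution lives under /opt; this location is an assumption of this write-up, not a Flink requirement. If you extracted the archive elsewhere, either adjust the path or move the directory:

$ sudo mv flink-1.6.2 /opt/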

Configure environment variables

Add the following variables to ~/.bash_profile:

export FLINK_HOME=/opt/flink-1.6.2
export PATH=$PATH:$FLINK_HOME/bin
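
Reload the profile so the variables take effect in the current shell, and confirm that FLINK_HOME points at the right place:

$ source ~/.bash_profile
$ echo $FLINK_HOME
/opt/flink-1.6.2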

Start Flink

$ $FLINK_HOME/bin/start-cluster.sh  # Start Flink

Check that the service is running via the web frontend:
http://localhost:8081
The web frontend should show one available TaskManager instance.
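
The same information is exposed through Flink's REST API, so the check can also be done from the command line; the /taskmanagers endpoint returns a JSON description of the registered TaskManagers:

$ curl http://localhost:8081/taskmanagers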


Check the running processes with jps

$ jps
4705 TaskManagerRunner
4940 Jps
4286 StandaloneSessionClusterEntrypoint

You can also verify that the system is running by checking the log files in the log directory:

$ tail log/flink-*-standalonesession-*.log
2018-11-16 09:22:03,204 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Rest endpoint listening at localhost:8081
2018-11-16 09:22:03,205 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - http://localhost:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
2018-11-16 09:22:03,205 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Web frontend listening at http://localhost:8081.
2018-11-16 09:22:03,359 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
2018-11-16 09:22:03,396 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/dispatcher .
2018-11-16 09:22:03,441 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager  - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000
2018-11-16 09:22:03,442 INFO  org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Starting the SlotManager.
2018-11-16 09:22:03,476 INFO  org.apache.flink.runtime.dispatcher.StandaloneDispatcher      - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000
2018-11-16 09:22:03,483 INFO  org.apache.flink.runtime.dispatcher.StandaloneDispatcher      - Recovering all persisted jobs.
2018-11-16 09:22:04,506 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager  - Registering TaskManager with ResourceID c5c7f1655d16bb0f18743804f4b1c72a (akka.tcp://[email protected]:55045/user/taskmanager_0) at ResourceManager
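
If you only want to confirm that the TaskManager registered with the ResourceManager, a single grep over the same log is enough:

$ grep "Registering TaskManager" log/flink-*-standalonesession-*.log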

Run an example

First, use netcat to start a local server:

$ nc -l 9000
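
Note that netcat implementations differ: the OpenBSD variant accepts nc -l 9000, while the traditional netcat wants the listen port passed with -p:

$ nc -l -p 9000   # traditional netcat; the OpenBSD variant uses "nc -l 9000"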

Submit the Flink job

$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Starting execution of program

The program connects to the socket and waits for input. You can check the web interface to verify that the job is running as expected:
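
The running job can also be listed with the Flink CLI; it should show the SocketWindowWordCount job in the RUNNING state:

$ ./bin/flink list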

Check jps again; there is now an additional CliFrontend process:

4705 TaskManagerRunner
5330 Jps
5052 CliFrontend
4286 StandaloneSessionClusterEntrypoint

Words are counted over 5-second time windows (processing time, tumbling windows) and printed to standard output. Monitor the TaskManager's output file and type some text into nc (each line is sent to Flink as soon as you press Enter):

$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye

The .out file prints the counts at the end of each time window:

$ tail -f log/flink-*-taskexecutor-*.out
lorem : 1
bye : 1
ipsum : 4
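
When you are done experimenting, the streaming job can be cancelled from the CLI. The job ID is printed by flink list; <jobId> below is a placeholder, not a real ID:

$ ./bin/flink list             # note the ID of the running job
$ ./bin/flink cancel <jobId>   # replace <jobId> with the ID from the previous command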

Stop the Flink service

$ ./bin/stop-cluster.sh
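
After the cluster has stopped, a final jps should no longer show the TaskManagerRunner and StandaloneSessionClusterEntrypoint processes:

$ jps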

