Local (single-machine) mode
Extract the archive and start Flink directly; no configuration changes are needed.
./bin/start-local.sh
Open a listening port (nc feeds text into the socket the job will read from)
nc -l 9000
Upload the jar and run it (mark the Flink dependencies in the build file as <scope>provided</scope>)
./bin/flink run XX.jar --port 9000
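The <scope>provided</scope> note above refers to the project's build file. A minimal sketch, assuming a Maven project (the artifact id and version here are assumptions -- match them to the Flink version actually installed):

```xml
<!-- Hypothetical pom.xml fragment: Flink itself is marked provided so it
     is NOT bundled into the fat jar; the cluster supplies it at runtime. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.7.2</version>
    <scope>provided</scope>
</dependency>
```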
Open the master node's web UI --> node1:8081
To stop a job, first get its job id:
./bin/flink list
./bin/flink cancel [jobid]
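The list/cancel pair above can be chained. A hedged sketch that cancels every running job, assuming job ids are the 32-character hex strings that `./bin/flink list` prints (Flink JobIDs are 32 lowercase hex characters):

```shell
# Cancel all jobs whose ids appear in the `flink list` output.
# Extracting by pattern avoids depending on the exact line layout.
for job in $(./bin/flink list 2>/dev/null | grep -oE '[0-9a-f]{32}'); do
  ./bin/flink cancel "$job"
done
```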
A job can also be cancelled from the web UI.
Run ./bin/flink -h to see all available commands.
Standalone mode (requires a cluster: copy the flink directory from node1 to the other nodes)
Edit the configuration files
vi flink-conf.yaml (mind the YAML format: a space is required after each colon)
jobmanager.rpc.address: localhost --> jobmanager.rpc.address: node1
vi slaves (list the worker node hostnames)
node2
node3
Start the cluster
./bin/start-cluster.sh
Run the job
./bin/flink run XX.jar --port 9000
Check Running Jobs in the web UI to see which node the job is running on, then on that machine:
tail -10f flink-root-taskmanager-0-node3.out
YARN mode (run the Flink cluster on YARN)
vi yarn-site.xml
# Disable YARN's virtual-memory check so the Flink containers are not killed at startup
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Start the HDFS and YARN clusters
**********Option 1: initialize a Flink session in YARN with a fixed amount of resources*****************
Start the Flink session (-n: number of TaskManagers, -jm/-tm: JobManager/TaskManager memory in MB)
./bin/yarn-session.sh -n 2 -jm 700 -tm 700
***On successful startup the log shows***
Number of connected TaskManagers changed to 1. Slots available: 1
Number of connected TaskManagers changed to 2. Slots available: 2
Submit the job
./bin/flink run flink-1.0-SNAPSHOT-jar-with-dependencies.jar --port 9009
Access the UI
In the YARN web UI, find the application in the RUNNING state and click its ApplicationMaster link to reach the Flink UI
In the Flink UI, Running Jobs shows which node the job is running on; under Task Managers open that node and click Stdout to watch the output
Release the session's cluster resources
yarn application -kill application_1538559819716_0004
Common errors
in safe mode
After starting Hadoop, wait for HDFS to leave safe mode before starting the Flink cluster
Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
Set at least one of HADOOP_CONF_DIR, YARN_CONF_DIR, or HADOOP_HOME as an environment variable, e.g.:
export HADOOP_HOME=/export/server/hadoop-2.7.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
First attempt -- container killed for exceeding the virtual memory limit
./bin/yarn-session.sh -n 2 -jm 1024 -tm 1024
is running beyond virtual memory limits. Current usage: 254.0 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Second attempt -- requested memory is below Flink's minimum heap cutoff
./bin/yarn-session.sh -n 2 -jm 512 -tm 512
The configuration value 'containerized.heap-cutoff-min' is higher (600) than the requested amount of memory 512
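A sketch of the arithmetic behind both failures, assuming YARN's default yarn.nodemanager.vmem-pmem-ratio of 2.1 and Flink's default containerized.heap-cutoff-min of 600 MB:

```shell
# Attempt 1 (-jm 1024 -tm 1024): the container gets 1 GB physical memory,
# so YARN caps virtual memory at 1 GB * 2.1 = 2.1 GB; the JVM used 2.2 GB
# and the container was killed (which is why vmem-check-enabled is
# disabled in yarn-site.xml above).
vmem_limit=$(awk 'BEGIN { printf "%.1f", 1 * 2.1 }')
echo "virtual memory cap: ${vmem_limit} GB"

# Attempt 2 (-jm 512 -tm 512): Flink reserves at least
# containerized.heap-cutoff-min = 600 MB off the top, more than the
# 512 MB requested, so the session refuses to start.
cutoff_min=600
requested=512
[ "$requested" -lt "$cutoff_min" ] && echo "request ${requested} MB < cutoff ${cutoff_min} MB"
```

Hence the 700 MB used in the working command above clears the 600 MB cutoff while staying small.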
*********Option 2: create a new Flink cluster per submission, so jobs do not interfere with each other (highly recommended)****
Launch command (-m yarn-cluster: per-job mode; -yn/-yjm/-ytm are the YARN equivalents of -n/-jm/-tm)
./bin/flink run -m yarn-cluster -yn 2 -yjm 700 -ytm 700 flink-1.0-SNAPSHOT-jar-with-dependencies.jar --port 9009