pom
Complete pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.msb</groupId>
    <artifactId>StudyFlink</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <flink.version>1.9.2</flink.version>
        <scala.version>2.11.8</scala.version>
        <redis.version>3.2.0</redis.version>
        <hbase.version>1.3.3</hbase.version>
        <mysql.version>5.1.44</mysql.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.bahir</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.0</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>${redis.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-filesystem_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <!-- When the Maven project contains both Java and Scala sources, the maven-scala-plugin compiles and packages both together -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Plugin required for building the jar with Maven -->
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <!-- Setting this to false strips the "-jar-with-dependencies" suffix from a name such as MySpark-1.0-SNAPSHOT-jar-with-dependencies.jar -->
                    <!--<appendAssemblyId>false</appendAssemblyId>-->
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>assembly</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
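As a usage note (not part of the original notes; the jar names follow from the artifactId and version above), packaging with
mvn clean package
should produce two jars under target/: the thin StudyFlink-1.0-SNAPSHOT.jar and the fat StudyFlink-1.0-SNAPSHOT-jar-with-dependencies.jar. The thin jar is the one submitted to the cluster later in these notes.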
Scala code
Complete code
package com.zxl.stream

import org.apache.flink.streaming.api.scala._

object WordCount {
  def main(args: Array[String]): Unit = {
    // Prepare the environment
    /**
     * createLocalEnvironment          creates a local execution environment
     * createLocalEnvironmentWithWebUI creates a local execution environment and also serves the Web UI on port 8081 (see the sketch after this code)
     * getExecutionEnvironment         creates the context for wherever the program runs, e.g. local or cluster
     */
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    /**
     * DataStream: a stream of elements of the same type.
     * When the source is a socket, its parallelism can only be 1.
     */
    val initStream: DataStream[String] = env.socketTextStream("node01", 8888)
    val wordStream = initStream.flatMap(_.split(" ")).setParallelism(3)
    val pairStream = wordStream.map((_, 1)).setParallelism(3)
    val keyByStream = pairStream.keyBy(0)
    val restStream = keyByStream.sum(1).setParallelism(3)
    restStream.print()
    /**
     * 6> (msb,1)
     * 1> (,,1)
     * 3> (hello,1)
     * 3> (hello,2)
     * 6> (msb,2)
     * The computation is stateful by default.
     * The prefix such as "6>" shows which thread processed the record.
     * Records with the same key are always handled by the same thread.
     **/
    // Launch the Flink job
    env.execute("first flink job")
  }
}
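The comment in main also mentions createLocalEnvironmentWithWebUI. Below is a minimal sketch of that variant (not part of the original notes); it assumes the extra dependency org.apache.flink:flink-runtime-web_2.11 with the same ${flink.version} is added to the pom so the UI can actually be served, and the object name WordCountWithWebUI is just illustrative.

package com.zxl.stream

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala._

object WordCountWithWebUI {
  def main(args: Array[String]): Unit = {
    // Local environment that also serves the Flink Web UI (http://localhost:8081 by default);
    // requires flink-runtime-web on the classpath (assumption noted above).
    val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration())
    env.socketTextStream("node01", 8888)
      .flatMap(_.split(" "))
      .map((_, 1))
      .keyBy(0)
      .sum(1)
      .print()
    env.execute("wordcount with local web ui")
  }
}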
Launching and testing
Local run
First open port 8888:
nc -lk 8888
Then run the main method.
As data is typed into the socket, the streaming job processes it in real time.
The computation is stateful by default: previous results are retained, so the counts keep accumulating.
6> (msb,1)
1> (,,1)
3> (hello,1)
3> (hello,2)
6> (msb,2)
The prefix such as "6>" shows which thread (sub-task) produced the record; records with the same key are always handled by the same thread.
More threads are not automatically better: with too many, the time spent starting threads can exceed the time spent on the actual computation.
With a parallelism of 1, only one thread is started to do the processing, and the output lines then carry no thread-number prefix. A minimal sketch of this variant follows.
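The sketch below (not from the original notes) sets the default parallelism on the environment to 1 and drops the per-operator setParallelism(3) calls, since operator-level settings would otherwise override the environment default; the object name WordCountSingleThread is just illustrative.

package com.zxl.stream

import org.apache.flink.streaming.api.scala._

object WordCountSingleThread {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // One thread for the whole job; with a sink parallelism of 1, print() omits the "n>" prefix.
    env.setParallelism(1)
    env.socketTextStream("node01", 8888)
      .flatMap(_.split(" "))
      .map((_, 1))
      .keyBy(0)
      .sum(1)
      .print()
    env.execute("wordcount with parallelism 1")
  }
}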
Running the jar on a cluster
Build with mvn package.
Pick the jar that does not bundle the dependencies: the cluster already provides those jars, so bundling them again would just duplicate them.
Submitting the job from the command line
Upload the jar to a node and run the following command:
- -c specifies the main class
- -d runs the job detached (in the background)
flink run -c <main-class> -d <jar-path>
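For example (the jar path is hypothetical; the main class comes from the WordCount object above):
flink run -c com.zxl.stream.WordCount -d /root/StudyFlink-1.0-SNAPSHOT.jar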
Check the Web UI
The job appears under Running Jobs. Send some data through the socket, click into the job, and the output can be seen.
Submitting the job from the Web UI
Job submission through the Web UI can also be turned off; it is enabled by default (the setting defaults to true).
vim conf/flink-conf.yaml
web.submit.enable: false  # disable submission through the Web UI