Flink getting-started practice: batch processing of local files, plus socket and Kafka stream processing (Java edition)

1. Flink basics; for a detailed introduction see https://www.cnblogs.com/davidwang456/p/11256748.html

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded data streams (which usually must be ingested in a specific order, e.g. the order in which the events occurred) and bounded data streams (where ordered ingestion is not required, since a bounded data set can always be sorted). Flink is designed to run in all common cluster environments and to perform computations at in-memory speed and at any scale. There are two core roles: the JobManager and the TaskManager.

 

2. Installation and deployment

Download page: https://flink.apache.org/downloads.html

wget http://mirror.bit.edu.cn/apache/flink/flink-1.9.2/flink-1.9.2-bin-scala_2.11.tgz , then unpack the archive, enter the bin directory and start the cluster with ./start-cluster.sh

Open http://ip:8081 in a browser to reach the Flink dashboard, where you can check the status and results of running jobs, submit new jobs, and so on.

3. Start a job with the official demo

1. First install a socket tool: yum install nmap-ncat.x86_64 , then run nc -l 9001 to open an interactive terminal;

2. In another window, start the Flink demo listening on port 9001: ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9001 ; the job then shows up in the dashboard.
3. The job writes its results into the log directory under the installation directory; the file name follows the pattern flink-<user>-taskexecutor-<task number>-<host>.out, e.g. tail -22f flink-root-taskexecutor-1-192.168.203.131.out
4. Go back to the terminal opened in step 1, type some data and press Enter; the result processed by Flink then appears in the log file from step 3.

4. Custom Flink job: socket word count (StreamExecutionEnvironment)

Create a plain Maven project in IDEA (not a Spring Boot project). The pom is shown below and packages the project as a jar; keeping the Flink dependency version >= the version shown in the Flink dashboard is generally fine.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>test</groupId>
    <artifactId>test</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging> 
    
     <properties>
        <flink.version>1.10.0</flink.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
      
    </dependencies>

</project>

Create the package test.flink and the class SocketWindowWordCount (the official demo class). The main method specifies the IP and port of the socket to connect to.

package test.flink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

@SuppressWarnings("serial")
public class SocketWindowWordCount {
    public static void main(String[] args) throws Exception {
        final String hostname;
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            hostname = params.has("hostname") ? params.get("hostname") : "192.168.203.131";
            port = params.has("port")?params.getInt("port"):9001;
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount " +
                    "--hostname <hostname> --port <port>', where hostname (localhost by default) " +
                    "and port is the address of the text server");
            System.err.println("To start a simple text server, run 'netcat -l <port>' and " +
                    "type the input text into the command line");
            return;
        }
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> text = env.socketTextStream(hostname,port,"\n");
        //Transform the text DataStream into a new DataStream:
        //split each incoming string and emit a WordWithCount object for every word
        DataStream<WordWithCount> windowCounts = text.flatMap(new FlatMapFunction<String, WordWithCount>() {
            @Override
            public void flatMap(String s, Collector<WordWithCount> collector) throws Exception {
                for(String word : s.split("\\s")){  //split the line on whitespace
                    collector.collect(new WordWithCount(word, 1L));
                }
            }
        })
                .keyBy("word")//key爲word字段
                .timeWindow(Time.seconds(5))	//五秒一次的翻滾時間窗口
                .reduce(new ReduceFunction<WordWithCount>() { //reduce策略
                    @Override
                    public WordWithCount reduce(WordWithCount a, WordWithCount b) throws Exception {
                        return new WordWithCount(a.word, a.count+b.count);
                    }
                });


        //print the results with parallelism 1 (single thread)
        windowCounts.print().setParallelism(1);

        // execute the job
        env.execute("Flink Streaming Java API Skeleton");
    }

    //pojo
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}

Starting the custom job

1. Start the job locally by running the main method; type data into nc -l 9001 and the results are printed to the console. A job run locally this way has nothing to do with the Flink dashboard.

2. Start the job on the Linux server with the flink CLI, as in the demo: ./bin/flink run -c test.flink.SocketWindowWordCount test-1.0-SNAPSHOT.jar --hostname 127.0.0.1 --port 9001 . The -c option points to the main class, and --hostname/--port are the two parameters read by the main method.

3. Use the Submit New Job page of the dashboard to upload the local jar, specify the main class and the other parameters, and start the job.

Besides the log file, the processing results can also be seen in the dashboard.

5. Custom Flink job: local file word count (ExecutionEnvironment). It is started the same way and lives in the same jar; since the input is a bounded stream, the batch job exits automatically once processing completes.

package test.flink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.aggregation.Aggregations;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.util.Collector;

public class LocalFileWordCount {

    public static void main(String[] args) throws Exception {

        final ParameterTool params = ParameterTool.fromArgs(args);

        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.getConfig().setGlobalJobParameters(params);

        //read the bounded input as a batch data set
        DataSet<String> text = env.readTextFile(params.get("input"));

        DataSet<Tuple2<String, Integer>> counts = text
                .flatMap(new Splitter())                    // split up the lines in pairs (2-tuples) containing: (word,1)
                .groupBy(0).aggregate(Aggregations.SUM, 1); // group by the tuple field "0" and sum up tuple field "1"

        counts.writeAsText(params.get("output"));

        env.execute("Local file word count");
    }
}

@SuppressWarnings("serial")
class Splitter implements FlatMapFunction<String, Tuple2<String, Integer>> {

    @Override
    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
        String[] tokens = value.split("\\W+");
        for (String token : tokens) {
            if (token.length() > 0) {
                out.collect(new Tuple2<String, Integer>(token, 1));
            }
        }
    }
}
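To run it, pass the input and output paths as program arguments, for example (the paths below are just placeholders): ./bin/flink run -c test.flink.LocalFileWordCount test-1.0-SNAPSHOT.jar --input /tmp/words.txt --output /tmp/wordcount-result , or set the same --input/--output arguments when launching the main method locally.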

6. Custom Flink job: Kafka word count (StreamExecutionEnvironment)

The approach is basically the same as above, only the data source changes. Kafka has quite a few configuration parameters, so it is better to load them from an external file rather than from script arguments, e.g. ParameterTool fromArgs = ParameterTool.fromPropertiesFile("/home/kafka/kafka.properties"). The processed data is usually not just written to local files but stored in downstream components such as Redis, ES, HDFS or a database, which requires importing the corresponding connector dependencies; see https://blog.csdn.net/boling_cavalry/article/details/85549434?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.pc_relevant.none-task

<dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.12</artifactId>
            <version>${flink.version}</version>
        </dependency>
package test.flink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.util.Collector;
import java.util.Properties;

public class KafkaFlinkStream {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000); // crucial: make sure checkpointing is enabled!!
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        //Kafka connection properties
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.0.101:9092");
        props.setProperty("group.id", "demo");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");  //key 反序列化
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("auto.offset.reset", "latest"); //value 反序列化
        FlinkKafkaConsumer011<String> consumer = new FlinkKafkaConsumer011<String>("demo",
                new SimpleStringSchema(),  // String 序列化
                props);

        DataStream<WordWithCount> kafkaCount = env.addSource(consumer).flatMap(new FlatMapFunction<String, WordWithCount>() {
            @Override
            public void flatMap(String s, Collector<WordWithCount> collector) throws Exception {
                for (String word : s.split("\\s")) {  //split the line on whitespace
                    collector.collect(new WordWithCount(word, 1L));
                }
            }
        })
                .filter(wordWithCount -> wordWithCount.word.length() > 0) //filter out empty words
                .keyBy("word") //key by the word field
                .reduce(new ReduceFunction<WordWithCount>() { //reduce: accumulate counts per word
                    @Override
                    public WordWithCount reduce(WordWithCount a, WordWithCount b) throws Exception {
                        return new WordWithCount(a.word, a.count + b.count);
                    }
                });
        kafkaCount.print();
        env.execute("Flink-Kafka測試");


    }

    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {
        }

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }

}
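The job can be started the same way as the socket example, e.g. ./bin/flink run -c test.flink.KafkaFlinkStream test-1.0-SNAPSHOT.jar , or simply by running the main method locally.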

Test it by producing messages to the topic: ./kafka-console-producer.sh --broker-list localhost:9092 --topic demo

7. API summary

The core APIs such as map, flatMap, reduce, filter and Collector work much like their Java 8 counterparts. Tuple types range from Tuple1 up to Tuple25 (at most 25 fields). For a more detailed API walkthrough see: https://blog.csdn.net/u014252478/article/details/102516060?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522158358898019725222419850%2522%252C%2522scm%2522%253A%252220140713.130056874..%2522%257D&request_id=158358898019725222419850&biz_id=0&utm_source=distribute.pc_search_result.none-task
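As a quick illustration (not part of the original demos), tuple fields are public and accessed positionally as f0, f1, and so on:

import org.apache.flink.api.java.tuple.Tuple2;

public class TupleDemo {
    public static void main(String[] args) {
        // Tuple types go from Tuple1 up to Tuple25; fields are positional (f0, f1, ...)
        Tuple2<String, Integer> pair = Tuple2.of("flink", 1);
        System.out.println(pair.f0 + " : " + pair.f1);   // prints "flink : 1"
    }
}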

