[Big Data Compute Engines: Flink] Understanding Flink through WordCount (Part 1)

I have been assigned to work on Flink for a few months now. Although I started straight away with the Flink SQL side of things, I almost forgot that Flink's mainstream use case is stream processing. I had never even done the entry-level Flink WordCount project, and lately I have wanted to read the Flink source code anyway, so let's use WordCount to walk through Flink's streaming flow and learn how Flink stream processing works.

This article was written with reference to a walk through the source code.

Writing the program first

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

/**
 * @author Mr.Sun
 * @version v.1.0
 * @title WordCount
 * @description Getting started with Flink: WordCount
 * @date 2019/12/13 9:15
 */
public class WordCount {
    public static void main(String[] args) throws Exception {
        // obtain the execution environment through the StreamEnv singleton shown below
        StreamEnv streamEnv = StreamEnv.builder().enableRestart().setParallelism(1).finish();
        StreamExecutionEnvironment env = streamEnv.getEnv();

        // read text lines from a socket source
        DataStream<String> socketTextStream = env.socketTextStream("hadoop103", 5555);

        socketTextStream.flatMap(new WordFlatMapFunction()).setParallelism(1)
                .keyBy(0)
                .sum(1)
                .print();

        env.execute("test");
    }

    private static class WordFlatMapFunction implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String word, Collector<Tuple2<String, Integer>> collector) throws Exception {
            // each line typed into the socket is treated as one word and emitted as (word, 1)
            collector.collect(new Tuple2<>(word, 1));
        }
    }
}

For obtaining the environment, I wrote a small singleton beforehand; you could also grab the execution environment directly in place. Here is that simple singleton:

import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

/**
 * @author Mr.Sun
 * @version v.1.0
 * @title StreamEnv
 * @description Builder-style wrapper around the stream execution environment
 * @date 2019/12/13 9:09
 */
public class StreamEnv {
    private final StreamExecutionEnvironment env;
    private StreamTableEnvironment tableEnv;
    private Builder builder ;

    private StreamEnv(Builder builder) {
        env = StreamExecutionEnvironment.getExecutionEnvironment();
        if (builder.RESTART){
            env.setRestartStrategy(RestartStrategies.failureRateRestart(
                    3,
                    Time.of(5, TimeUnit.MINUTES),
                    Time.of(10,TimeUnit.SECONDS)
            ));
        }
        if (builder.parallelism != null ){
            env.setParallelism(builder.parallelism);
        }
        env.enableCheckpointing(1000 , CheckpointingMode.EXACTLY_ONCE) ;

        this.builder = builder ;
    }

    public StreamExecutionEnvironment getEnv() {
        return env;
    }

    public StreamTableEnvironment getTableEnv() {
        if(tableEnv == null) {
            tableEnv = StreamTableEnvironment.create(env);
        }
        return tableEnv;
    }
    public static Builder builder() {
        return new Builder();
    }
    public static class Builder{
        private boolean RESTART = false ;
        private Integer parallelism ;

        public Builder enableRestart(){
            this.RESTART = true ;
            return this ;
        }
        public Builder setParallelism(int parallelism){
            this.parallelism = parallelism ;
            return this ;
        }

        public StreamEnv finish(){
            return new StreamEnv(this) ;
        }
    }
}

With the program written, we start a netcat server on the host hadoop103 (the exact flags depend on your netcat build; some versions need nc -l -p 5555 instead):

nc -l 5555

Then we type our words and can see the result:
(figure: demo-test)
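For example, with parallelism 1 and one word typed per line (which is what the flatMap above assumes), the running count is updated on every repeated word. Roughly (left: lines typed into nc; right: output of print() in the Flink console; exact formatting may vary):

hello     ->  (hello,1)
flink     ->  (flink,1)
hello     ->  (hello,2)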

Stepping into the program, starting with the environment

The program starts from this line:

env = StreamExecutionEnvironment.getExecutionEnvironment();

This returns Flink's streaming execution environment; to get the batch execution environment instead, you would write:

env = ExecutionEnvironment.getExecutionEnvironment();

The execution environment is the execution context for the whole Flink program. It carries a number of settings, for example whether to enable a failure restart strategy and what parallelism to use, as I did above. Let's step inside and see what information this environment gives us:
(figure: LocalStreamEnvironment)

Here we can clearly see that, for streaming, StreamExecutionEnvironment is the parent class, with subclasses LocalStreamEnvironment, RemoteStreamEnvironment, StreamPlanEnvironment, and StreamContextEnvironment.
(figure: StreamExecutionEnvironment)
This diagram shows the basic contents of StreamExecutionEnvironment: its methods, fields, configuration, and so on. The image is a bit small, so I suggest browsing the source yourself.
For a distributed stream-processing program, the operators we define in code, such as flatMap(), keyBy(), and sum(), are really just declarations. They tell the program "a flatMap() operator is used here", but the code that actually starts the computation is not there. Flink is lazily evaluated, so env.execute() must be called to run the Flink program. Since we are running locally, a LocalStreamEnvironment is returned here.
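To make this lazy-execution point concrete, here is a minimal, hypothetical sketch (not part of the article's project): the calls before execute() only declare transformations, and nothing runs until execute() is reached.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LazyExecutionDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // These calls only *declare* operators; internally they just append transformations.
        env.fromElements("a", "b", "a")
           .map(w -> Tuple2.of(w, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT))
           .print();

        // Nothing has run yet. Only this call builds the StreamGraph and submits the job
        // (to a local MiniCluster when started from the IDE).
        env.execute("lazy-demo");
    }
}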

Registering (declaring) operators

The first operator I used above is flatMap(new WordFlatMapFunction()); stepping into it makes things clear:

    public <R> SingleOutputStreamOperator<R> flatMap(FlatMapFunction<T, R> flatMapper) {
        // extract the output type of the user function via the TypeExtractor
        TypeInformation<R> outType = TypeExtractor.getFlatMapReturnTypes(clean(flatMapper),
                getType(), Utils.getCallLocationName(), true);
        return flatMap(flatMapper, outType);
    }

    public <R> SingleOutputStreamOperator<R> flatMap(FlatMapFunction<T, R> flatMapper, TypeInformation<R> outputType) {
        // wrap the user function in a StreamFlatMap operator and register the transformation
        return transform("Flat Map", outputType, new StreamFlatMap<>(clean(flatMapper)));
    }

We can see that this first obtains the output type of the flatMap operator, outType, and then creates an Operator, which matches Flink's core streaming model: chained processing. Data flows from the input through one operator after another and finally to the output. Every processing step on the stream is an operator, and operators can also be fused together into a chain.
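As a side note (a hedged sketch, not from the article's code), chaining can also be influenced explicitly per operator, which is handy when you want to see the chain boundaries:

// Hypothetical variation of the WordCount pipeline above; the chaining hints are optional,
// Flink chains eligible operators automatically.
env.socketTextStream("hadoop103", 5555)
   .flatMap(new WordFlatMapFunction())
   .startNewChain()      // force a new chain to begin at this operator
   .keyBy(0)             // keyBy introduces a shuffle, so the next operator cannot chain across it anyway
   .sum(1)
   .disableChaining()    // keep the aggregation out of any chain
   .print();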
How does Flink view a user's processing flow? The official documentation shows it like this:
(figure: flink-dela-steam)
Abstracted, it becomes:
(figure: flink-transform)
As for the operators involved, this diagram shows the DataStreamSource -> SingleOutputStreamOperator -> DataStream relationship, along with the related operator methods.
(figure: DataStreamSource)
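To make that relationship concrete, here is a hedged sketch reusing the WordCount classes above, with the static type spelled out at every step:

// DataStreamSource extends SingleOutputStreamOperator, which extends DataStream,
// so each intermediate value below is still a DataStream.
DataStreamSource<String> source = env.socketTextStream("hadoop103", 5555);
SingleOutputStreamOperator<Tuple2<String, Integer>> words = source.flatMap(new WordFlatMapFunction());
KeyedStream<Tuple2<String, Integer>, Tuple> keyed = words.keyBy(0);
SingleOutputStreamOperator<Tuple2<String, Integer>> counts = keyed.sum(1);
counts.print();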

Executing the program

As mentioned earlier, an operator is only a declaration; declaring it does not make it run. Like Spark, Flink is lazily evaluated, so we have to trigger the execution of the Flink program ourselves.

// the argument here is the job name
env.execute("test");

The source here confused me a little at first; let's take a look:

    public JobExecutionResult execute(String jobName) throws Exception {
        Preconditions.checkNotNull(jobName, "Streaming Job name should not be null.");

        return execute(getStreamGraph(jobName));
    }

Since I already passed in a jobName, why check whether it is null? Puzzling at first, but execute(String) is a public API, so Preconditions.checkNotNull is simply a defensive guard against callers passing an explicit null.

The execute method in local mode

(figure: local-execute, http://www.source.sun-iot.xyz/flink/code-read/local-execute.png)
This call mainly does the following (a small sketch follows the list):

  • Generate the StreamGraph, the class representing the stream topology. It contains all the information needed to build the job graph for execution.

  • Generate the JobGraph, the graph that is handed to Flink to create the actual tasks.

  • Generate a series of configurations.

  • Hand the JobGraph and the configurations to the Flink cluster to run. A local run spins up a minimal cluster environment for us, i.e. the local environment; a non-local run also ships the JAR files to the other nodes over the network.

  • When running in local mode you can watch the startup sequence: metrics, the web module, JobManager, ResourceManager, TaskManager, and so on.

  • Start the job. Notably, before the job starts, a user-code class loader is created, which can be used to load classes dynamically at run time.
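As a quick way to see the first of these steps without running anything, here is a hedged sketch: StreamExecutionEnvironment can hand back the plan it would build as JSON via getExecutionPlan().

// Reusing the WordCount pipeline from above; getExecutionPlan() only builds and
// serializes the streaming plan, it does not submit a job.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.socketTextStream("hadoop103", 5555)
   .flatMap(new WordFlatMapFunction())
   .keyBy(0)
   .sum(1)
   .print();
System.out.println(env.getExecutionPlan());

The printed JSON describes the operators we declared, which is a handy sanity check before looking at how the JobGraph is built.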

The execute() method in remote mode (RemoteStreamEnvironment)

I have not stepped through this part myself yet, so let's go with what others have written.

Program execution in remote mode is a bit more interesting. The first step is still to obtain the StreamGraph, after which executeRemotely is called for remote execution. That method first creates a user-code class loader:

ClassLoader usercodeClassLoader = JobWithJars.buildUserCodeClassLoader(jarFiles, globalClasspaths, getClass().getClassLoader());

Then it creates a series of configurations and hands them to a Client object:

    ClusterClient client;
    try {
        client = new StandaloneClusterClient(configuration);
        client.setPrintStatusDuringExecution(getConfig().isSysoutLoggingEnabled());
    } catch (Exception e) {
        // connection error handling elided in this excerpt
    }

    try {
        return client.run(streamGraph, jarFiles, globalClasspaths, usercodeClassLoader)
                .getJobExecutionResult();
    } finally {
        // client shutdown and error handling elided in this excerpt
    }

The client first builds a JobGraph and then passes it to the JobClient. For who manages whom among Client, JobClient, and JobManager, see this:
(figure: flink-clinet-jobclient-manager)
More precisely, the JobClient communicates with the JobManager asynchronously (Actor is Scala's asynchrony module); the actual communication work is done by a JobClientActor. Correspondingly, the JobManager's communication is also handled by an Actor.

JobListeningContext jobListeningContext = submitJob(
        actorSystem,
        config,
        highAvailabilityServices,
        jobGraph,
        timeout,
        sysoutLogUpdates,
        classLoader);

return awaitJobResult(jobListeningContext);

As you can see, submitJob returns a JobListeningContext and the method then blocks in awaitJobResult; through this context we can obtain the program's running state and result.

That is roughly it for both environments. Now that our program has called execute(), let's step in:

public JobExecutionResult execute(StreamGraph streamGraph) throws Exception {
        // a JobGraph is generated from the StreamGraph here
        JobGraph jobGraph = streamGraph.getJobGraph();
        jobGraph.setAllowQueuedScheduling(true);
        // build the configuration
        Configuration configuration = new Configuration();
        configuration.addAll(jobGraph.getJobConfiguration());
        configuration.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "0");
        // add (and override) the settings with what the user defined
        configuration.addAll(this.configuration);

        if (!configuration.contains(RestOptions.BIND_PORT)) {
            configuration.setString(RestOptions.BIND_PORT, "0");
        }

        int numSlotsPerTaskManager = configuration.getInteger(TaskManagerOptions.NUM_TASK_SLOTS, jobGraph.getMaximumParallelism());
        // build a minimal cluster configuration
        MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
            .setConfiguration(configuration)
            .setNumSlotsPerTaskManager(numSlotsPerTaskManager)
            .build();

        if (LOG.isInfoEnabled()) {
            LOG.info("Running job on local embedded Flink mini cluster");
        }
        // run the program on this minimal (mini) cluster
        MiniCluster miniCluster = new MiniCluster(cfg);

        try {
            // start the cluster, including starting the JobMaster, leader election, and so on
            miniCluster.start();
            configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());
            // submit the job and block until the result is available
            return miniCluster.executeJobBlocking(jobGraph);
        }
        finally {
            transformations.clear();
            miniCluster.close();
        }
    }

Next let's look at this submission method, executeJobBlocking on the MiniCluster, and step inside:

    public JobExecutionResult executeJobBlocking(JobGraph job) throws JobExecutionException, InterruptedException {
        Preconditions.checkNotNull(job, "job is null");
        // here the job is submitted (asynchronously, returning a future for the submission result)
        CompletableFuture<JobSubmissionResult> submissionFuture = this.submitJob(job);
        CompletableFuture jobResultFuture = submissionFuture.thenCompose((ignored) -> {
            return this.requestJobResult(job.getJobID());
        });

        JobResult jobResult;
        try {
            jobResult = (JobResult)jobResultFuture.get();
        } catch (ExecutionException var7) {
            throw new JobExecutionException(job.getJobID(), "Could not retrieve JobResult.", ExceptionUtils.stripExecutionException(var7));
        }

        try {
            return jobResult.toJobExecutionResult(Thread.currentThread().getContextClassLoader());
        } catch (ClassNotFoundException | IOException var6) {
            throw new JobExecutionException(job.getJobID(), var6);
        }
    }

Now we have found the actual submit method, submitJob(); let's take a peek inside:

	public CompletableFuture<JobSubmissionResult> submitJob(JobGraph jobGraph) {
		final CompletableFuture<DispatcherGateway> dispatcherGatewayFuture = getDispatcherGatewayFuture();

		// we have to allow queued scheduling in Flip-6 mode because we need to request slots
		// from the ResourceManager
		jobGraph.setAllowQueuedScheduling(true);

		final CompletableFuture<InetSocketAddress> blobServerAddressFuture = createBlobServerAddress(dispatcherGatewayFuture);

		final CompletableFuture<Void> jarUploadFuture = uploadAndSetJobFiles(blobServerAddressFuture, jobGraph);

		final CompletableFuture<Acknowledge> acknowledgeCompletableFuture = jarUploadFuture
			.thenCombine(
				dispatcherGatewayFuture,
				(Void ack, DispatcherGateway dispatcherGateway) ->
				// the actual submission to the Dispatcher happens here
				dispatcherGateway.submitJob(jobGraph, rpcTimeout))
			.thenCompose(Function.identity());

		return acknowledgeCompletableFuture.thenApply(
			(Acknowledge ignored) -> new JobSubmissionResult(jobGraph.getJobID()));
	}

The Dispatcher here is the class that receives a job and then assigns a JobMaster to start it. Looking at its implementations: locally a MiniDispatcher is started, while a StandaloneDispatcher is started when submitting a job to a cluster.
(figure: Dispatcher, http://www.source.sun-iot.xyz/flink/code-read/Dispatcher.png)

The Dispatcher starts a JobManagerRunner and delegates to it to start the JobMaster for this job. Look at the code:

	@Override
	public CompletableFuture<Acknowledge> submitJob(JobGraph jobGraph, Time timeout) {
		log.info("Received JobGraph submission {} ({}).", jobGraph.getJobID(), jobGraph.getName());

		try {
			if (isDuplicateJob(jobGraph.getJobID())) {
				return FutureUtils.completedExceptionally(
					new JobSubmissionException(jobGraph.getJobID(), "Job has already been submitted."));
			} else if (isPartialResourceConfigured(jobGraph)) {
				return FutureUtils.completedExceptionally(
					new JobSubmissionException(jobGraph.getJobID(), "Currently jobs is not supported if parts of the vertices have " +
							"resources configured. The limitation will be removed in future versions."));
			} else {
				return internalSubmitJob(jobGraph);
			}
		} catch (FlinkException e) {
			return FutureUtils.completedExceptionally(e);
		}
	}

Let's keep digging:

	private CompletableFuture<Acknowledge> internalSubmitJob(JobGraph jobGraph) {
		log.info("Submitting job {} ({}).", jobGraph.getJobID(), jobGraph.getName());

		final CompletableFuture<Acknowledge> persistAndRunFuture = waitForTerminatingJobManager(jobGraph.getJobID(), jobGraph, this::persistAndRunJob)
			.thenApply(ignored -> Acknowledge.get());

		return persistAndRunFuture.handleAsync((acknowledge, throwable) -> {
			if (throwable != null) {
				cleanUpJobData(jobGraph.getJobID(), true);

				final Throwable strippedThrowable = ExceptionUtils.stripCompletionException(throwable);
				log.error("Failed to submit job {}.", jobGraph.getJobID(), strippedThrowable);
				throw new CompletionException(
					new JobSubmissionException(jobGraph.getJobID(), "Failed to submit job.", strippedThrowable));
			} else {
				return acknowledge;
			}
		}, getRpcService().getExecutor());
	}

We can see the persistAndRunJob method here; it persists the JobGraph and then calls runJob, which is where the JobManagerRunner comes from:

	private CompletableFuture<Void> runJob(JobGraph jobGraph) {
		Preconditions.checkState(!jobManagerRunnerFutures.containsKey(jobGraph.getJobID()));

		final CompletableFuture<JobManagerRunner> jobManagerRunnerFuture = createJobManagerRunner(jobGraph);

		jobManagerRunnerFutures.put(jobGraph.getJobID(), jobManagerRunnerFuture);

		return jobManagerRunnerFuture
			.thenApply(FunctionUtils.nullFn())
			.whenCompleteAsync(
				(ignored, throwable) -> {
					if (throwable != null) {
						jobManagerRunnerFutures.remove(jobGraph.getJobID());
					}
				},
				getMainThreadExecutor());
	}

	private CompletableFuture<JobManagerRunner> createJobManagerRunner(JobGraph jobGraph) {
		final RpcService rpcService = getRpcService();

		final CompletableFuture<JobManagerRunner> jobManagerRunnerFuture = CompletableFuture.supplyAsync(
			CheckedSupplier.unchecked(() ->
				jobManagerRunnerFactory.createJobManagerRunner(
					jobGraph,
					configuration,
					rpcService,
					highAvailabilityServices,
					heartbeatServices,
					jobManagerSharedServices,
					new DefaultJobManagerJobMetricGroupFactory(jobManagerMetricGroup),
					fatalErrorHandler)),
			rpcService.getExecutor());

		return jobManagerRunnerFuture.thenApply(FunctionUtils.uncheckedFunction(this::startJobManagerRunner));
	}

As you can see, what ultimately gets created here is a JobManagerRunner. Stepping into JobManagerRunner:

	private CompletableFuture<Void> verifyJobSchedulingStatusAndStartJobManager(UUID leaderSessionId) {
		final CompletableFuture<JobSchedulingStatus> jobSchedulingStatusFuture = getJobSchedulingStatus();

		return jobSchedulingStatusFuture.thenCompose(
			jobSchedulingStatus -> {
				if (jobSchedulingStatus == JobSchedulingStatus.DONE) {
					return jobAlreadyDone();
				} else {
					return startJobMaster(leaderSessionId);
				}
			});
	}

We end up at this method; whatever else it does, this is where the JobMaster gets started (startJobMaster).

Summary

  1. The client code's execute method runs;
  2. In the local environment, the MiniCluster does most of the work and hands the job
     straight to a MiniDispatcher; in the remote environment, a RestClusterClient is started,
     which submits the user code to the cluster over HTTP REST;
  3. In the remote environment, once the request reaches the cluster, a handler has to process
     it; here that is the JobSubmitHandler. It takes over the request and delegates to a
     StandaloneDispatcher to start the job; from this point on, the local and remote
     submission paths converge again;
  4. Once the Dispatcher has the job, it instantiates a JobManagerRunner and uses that runner
     to start the job;
  5. The JobManagerRunner then hands the job over to the JobMaster;
  6. The JobMaster uses the ExecutionGraph machinery to start the whole execution graph, and
     with that the job is up and running.
     That wraps up the first part.