Flink JAR Upload and Run Logic

Notes

  1. Goal: walk through the Flink client code related to Upload jar and Run jar
  2. Source version: 1.6.1
  3. Deployment mode: Standalone
  4. Related concepts: Netty, CompletableFuture (see the sketch below)
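
Because the handler code walked through below leans heavily on CompletableFuture composition, here is a minimal, self-contained sketch of the two operators that matter most, thenCombine and thenCompose (an illustration only, not Flink code):

import java.util.concurrent.CompletableFuture;

public class CompletableFutureSketch {
	public static void main(String[] args) {
		CompletableFuture<String> jobGraphFuture = CompletableFuture.supplyAsync(() -> "jobGraph");
		CompletableFuture<Integer> portFuture = CompletableFuture.supplyAsync(() -> 6124);

		// thenCombine joins two independent futures (cf. jobGraphFuture.thenCombine(blobServerPortFuture, ...))
		CompletableFuture<String> uploadFuture =
			jobGraphFuture.thenCombine(portFuture, (graph, port) -> graph + " uploaded via port " + port);

		// thenCompose chains a dependent asynchronous step (cf. jarUploadFuture.thenCompose(... submitJob ...))
		CompletableFuture<String> submissionFuture =
			uploadFuture.thenCompose(g -> CompletableFuture.supplyAsync(() -> g + ", submitted"));

		System.out.println(submissionFuture.join()); // jobGraph uploaded via port 6124, submitted
	}
}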

Starting the RestServer

RestServerEndpoint.start

Registering handlers

Code from DispatcherRestEndpoint.java:

protected List<Tuple2<RestHandlerSpecification, ChannelInboundHandler>> initializeHandlers(CompletableFuture<String> restAddressFuture) {
		List<Tuple2<RestHandlerSpecification, ChannelInboundHandler>> handlers = super.initializeHandlers(restAddressFuture);
    ...

	JobSubmitHandler jobSubmitHandler = new JobSubmitHandler(
		restAddressFuture,
		leaderRetriever,
		timeout,
		responseHeaders,
		executor,
		clusterConfiguration);

	if (clusterConfiguration.getBoolean(WebOptions.SUBMIT_ENABLE)) {
		try {
		    // the JAR Upload and Run handlers are registered here
			webSubmissionExtension = WebMonitorUtils.loadWebSubmissionExtension(
				leaderRetriever,
				restAddressFuture,
				timeout,
				responseHeaders,
				uploadDir,
				executor,
				clusterConfiguration);

			// register extension handlers
			handlers.addAll(webSubmissionExtension.getHandlers());
		} catch (FlinkException e) {
		...
		}
	} else {
		log.info("Web-based job submission is not enabled.");
	}

    ...

	return handlers;
}

In WebSubmissionExtension you can see that handlers are defined for Upload, Run, List, Delete, and Plan.
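
Each of these is an ordinary Netty ChannelInboundHandler (note the Tuple2<RestHandlerSpecification, ChannelInboundHandler> element type above); RestServerEndpoint.start wires them into a Netty HTTP pipeline. As a refresher on the Netty side, a minimal stand-alone HTTP server looks roughly like this (a generic illustration of the Netty machinery, not Flink's actual pipeline setup):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.*;

public class MiniRestServer {
	public static void main(String[] args) throws InterruptedException {
		EventLoopGroup boss = new NioEventLoopGroup(1);
		EventLoopGroup worker = new NioEventLoopGroup();
		try {
			ServerBootstrap bootstrap = new ServerBootstrap()
				.group(boss, worker)
				.channel(NioServerSocketChannel.class)
				.childHandler(new ChannelInitializer<SocketChannel>() {
					@Override
					protected void initChannel(SocketChannel ch) {
						ch.pipeline()
							.addLast(new HttpServerCodec())
							.addLast(new HttpObjectAggregator(1024 * 1024))
							// in Flink, the registered REST handlers sit at this position in the pipeline
							.addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
								@Override
								protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
									FullHttpResponse resp = new DefaultFullHttpResponse(
										HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
									resp.headers().set(HttpHeaderNames.CONTENT_LENGTH, 0);
									ctx.writeAndFlush(resp).addListener(ChannelFutureListener.CLOSE);
								}
							});
					}
				});
			Channel ch = bootstrap.bind(8081).sync().channel();
			ch.closeFuture().sync();
		} finally {
			boss.shutdownGracefully();
			worker.shutdownGracefully();
		}
	}
}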

Upload JAR

The handling code is in JarUploadHandler's handleRequest method.

The JAR storage path:

jarDir.resolve(UUID.randomUUID() + "_" + fileUpload.getFileName());
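
handleRequest essentially moves the file received by the upload pipeline into jarDir under this randomized name. A rough sketch of that move in plain java.nio (an illustration, not the actual JarUploadHandler code):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public class JarStoreSketch {
	/** Moves an uploaded temp file into jarDir under "<random UUID>_<original file name>". */
	static Path storeUploadedJar(Path uploadedTempFile, Path jarDir) throws IOException {
		Path destination = jarDir.resolve(UUID.randomUUID() + "_" + uploadedTempFile.getFileName());
		Files.createDirectories(jarDir);
		return Files.move(uploadedTempFile, destination, StandardCopyOption.REPLACE_EXISTING);
	}
}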

The method's own logic is simple; the less obvious part is where the value of jarDir comes from. Tracing the assignment backwards:

  1. jarDir is assigned as a field when JarUploadHandler is constructed;
  2. JarUploadHandler is constructed by WebSubmissionExtension via WebMonitorUtils.loadWebSubmissionExtension, and jarDir comes from the uploadDir field of the parent class RestServerEndpoint;
  3. uploadDir in RestServerEndpoint is initialized via configuration.getUploadDir();
  4. The origin is found in RestServerEndpointConfiguration:
    final Path uploadDir = Paths.get(
    	config.getString(WebOptions.UPLOAD_DIR,	config.getString(WebOptions.TMP_DIR)),
    	"flink-web-upload");
    

In most cases nobody overrides WebOptions.UPLOAD_DIR (config key "web.upload.dir"), so JARs end up under "$WebOptions.TMP_DIR/flink-web-upload".

How WebOptions.TMP_DIR gets its value is also somewhat hidden. Looking only at the configuration file, it points to the /tmp directory, but ClusterEntrypoint's generateClusterConfiguration actually rewrites the value:

final String webTmpDir = configuration.getString(WebOptions.TMP_DIR);
final File uniqueWebTmpDir = new File(webTmpDir, "flink-web-" + UUID.randomUUID());

resultConfiguration.setString(WebOptions.TMP_DIR, uniqueWebTmpDir.getAbsolutePath());

The net effect is that the JAR storage directory is "/tmp/flink-web-<UUID>/flink-web-upload".
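
Putting the pieces together, here is a small sketch that reproduces the directory logic above with plain JDK calls (assuming web.tmpdir is left at its default, the JVM's java.io.tmpdir):

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

public class UploadDirSketch {
	public static void main(String[] args) {
		// WebOptions.TMP_DIR defaults to java.io.tmpdir, typically /tmp
		String webTmpDir = System.getProperty("java.io.tmpdir");
		// ClusterEntrypoint rewrites WebOptions.TMP_DIR to a unique sub-directory
		Path uniqueWebTmpDir = Paths.get(webTmpDir, "flink-web-" + UUID.randomUUID());
		// RestServerEndpointConfiguration appends "flink-web-upload"
		Path uploadDir = uniqueWebTmpDir.resolve("flink-web-upload");
		System.out.println(uploadDir); // e.g. /tmp/flink-web-<UUID>/flink-web-upload
	}
}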

Storing JARs under /tmp is risky: the files can be deleted once they expire.

Run JAR

As above, the place to focus on is JarRunHandler's handleRequest:

@Override
protected CompletableFuture<JarRunResponseBody> handleRequest(
		@Nonnull final HandlerRequest<JarRunRequestBody, JarRunMessageParameters> request,
		@Nonnull final DispatcherGateway gateway) throws RestHandlerException {
    ...

	// generate the JobGraph
	final CompletableFuture<JobGraph> jobGraphFuture = getJobGraphAsync(
		jarFile,
		entryClass,
		programArgs,
		savepointRestoreSettings,
		parallelism);

	CompletableFuture<Integer> blobServerPortFuture = gateway.getBlobServerPort(timeout);

	// upload the JobGraph's files (user JAR and user artifacts) to the BlobServer
	CompletableFuture<JobGraph> jarUploadFuture = jobGraphFuture.thenCombine(blobServerPortFuture, (jobGraph, blobServerPort) -> {
		final InetSocketAddress address = new InetSocketAddress(gateway.getHostname(), blobServerPort);
		try {
			ClientUtils.extractAndUploadJobGraphFiles(jobGraph, () -> new BlobClient(address, configuration));
		} catch (FlinkException e) {
			throw new CompletionException(e);
		}

		return jobGraph;
	});

	CompletableFuture<Acknowledge> jobSubmissionFuture = jarUploadFuture.thenCompose(jobGraph -> {
		// we have to enable queued scheduling because slots will be allocated lazily
		jobGraph.setAllowQueuedScheduling(true);
		// submit the job
		return gateway.submitJob(jobGraph, timeout);
	});

	return jobSubmissionFuture
		.thenCombine(jarUploadFuture, (ack, jobGraph) -> new JarRunResponseBody(jobGraph.getJobID()))
		.exceptionally(throwable -> {
			throw new CompletionException(new RestHandlerException(
				throwable.getMessage(),
				HttpResponseStatus.INTERNAL_SERVER_ERROR,
				throwable));
		});
}

Generating the JobGraph

/* The PackagedProgram is constructed in JarRunHandler's getJobGraphAsync */
final PackagedProgram packagedProgram = new PackagedProgram(
		jarFile.toFile(),
		entryClass,
		programArgs.toArray(new String[programArgs.size()]));
jobGraph = PackagedProgramUtils.createJobGraph(packagedProgram, configuration, parallelism);

/* From PackagedProgramUtils.java */
public static JobGraph createJobGraph(
	PackagedProgram packagedProgram,
	Configuration configuration,
	int defaultParallelism) throws ProgramInvocationException {
    ....

	if (packagedProgram.isUsingProgramEntryPoint()) {
		...
	} else if (packagedProgram.isUsingInteractiveMode()) {
	    /* Streaming programs submitted this way usually take this branch; the deciding check is whether the user's main class implements Flink's Program interface (via isAssignableFrom), and interactive mode is used when it does not */
		final OptimizerPlanEnvironment optimizerPlanEnvironment = new OptimizerPlanEnvironment(optimizer);

		optimizerPlanEnvironment.setParallelism(defaultParallelism);

		// this triggers the invocation of the user's main method
		flinkPlan = optimizerPlanEnvironment.getOptimizedPlan(packagedProgram);
	} else {
		throw new ProgramInvocationException("PackagedProgram does not have a valid invocation mode.");
	}

	if (flinkPlan instanceof StreamingPlan) {
		// obtain the JobGraph
		jobGraph = ((StreamingPlan) flinkPlan).getJobGraph();
		jobGraph.setSavepointRestoreSettings(packagedProgram.getSavepointSettings());
	} else {
		...
	}

    ...

	return jobGraph;
}

Invoking the user program's main method

/* From OptimizerPlanEnvironment.java */
public FlinkPlan getOptimizedPlan(PackagedProgram prog) throws ProgramInvocationException {
    ...
    
	/* register this OptimizerPlanEnvironment as the context environment, so the user program's getExecutionEnvironment() will return it */
	setAsContext();
	try {
		/* invoke the user program's main method */
		prog.invokeInteractiveModeForExecution();
	}
	...
}

Executing the user program's main method

// a typical main method structure
public static void main(String[] args) throws Exception {
    /* this returns the OptimizerPlanEnvironment that setAsContext registered in the previous step */
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    ...
    /* this ends up calling OptimizerPlanEnvironment's execute */
    env.execute();
}

Executing execute (very similar to a familiar testing concept: stubbing)

public JobExecutionResult execute(String jobName) throws Exception {
	/* capture the compiled FlinkPlan */
	Plan plan = createProgramPlan(jobName);
	this.optimizerPlan = compiler.compile(plan);

	// any user code placed after execute() will never run
	// do not go on with anything now!
	throw new ProgramAbortException();
}
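
The trick is easier to see in isolation: execute() captures the plan and then aborts the user's main() with a sentinel exception, which the caller catches and ignores. A minimal, Flink-free sketch of the pattern:

public class PlanCaptureSketch {

	/** Sentinel thrown by execute() to stop the user program right after the plan is captured. */
	static class AbortException extends Error {}

	/** Stand-in for OptimizerPlanEnvironment: records the "plan" instead of running it. */
	static class CapturingEnvironment {
		String capturedPlan;

		void execute(String plan) {
			this.capturedPlan = plan;   // record the compiled plan
			throw new AbortException(); // do not go on with anything now!
		}
	}

	/** Stand-in for a user main(): builds a plan and calls execute(). */
	static void userMain(CapturingEnvironment env) {
		env.execute("my-job-plan");
		System.out.println("never reached");
	}

	public static void main(String[] args) {
		CapturingEnvironment env = new CapturingEnvironment();
		try {
			userMain(env);                 // corresponds to invokeInteractiveModeForExecution()
		} catch (AbortException expected) {
			// expected: the sentinel only serves to unwind the user program
		}
		System.out.println("captured plan: " + env.capturedPlan);
	}
}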

Submitting the JobGraph

OK, now that we have the JobGraph, let's look more closely at how it is submitted.

/* From Dispatcher.java */
public CompletableFuture<Acknowledge> submitJob(JobGraph jobGraph, Time timeout) {

	...

	if (jobSchedulingStatus == RunningJobsRegistry.JobSchedulingStatus.DONE || jobManagerRunnerFutures.containsKey(jobId)) {
		return FutureUtils.completedExceptionally(
			new JobSubmissionException(jobId, String.format("Job has already been submitted and is in state %s.", jobSchedulingStatus)));
	} else {
		// the key method here is persistAndRunJob
		final CompletableFuture<Acknowledge> persistAndRunFuture = waitForTerminatingJobManager(jobId, jobGraph, this::persistAndRunJob)
			.thenApply(ignored -> Acknowledge.get());

		return persistAndRunFuture.exceptionally(
			(Throwable throwable) -> {
				final Throwable strippedThrowable = ExceptionUtils.stripCompletionException(throwable);
				log.error("Failed to submit job {}.", jobId, strippedThrowable);
				throw new CompletionException(
					new JobSubmissionException(jobId, "Failed to submit job.", strippedThrowable));
			});
	}
}

Skipping some intermediate calls, the call sequence is:

  1. Dispatcher.persistAndRunJob
  2. Dispatcher.runJob
  3. Dispatcher.createJobManagerRunner, which creates the JobMaster
  4. JobMaster.createAndRestoreExecutionGraph
    Finally we see the ExecutionGraph.

The ExecutionGraph deploy process

The call chain:

  1. Continuing from Dispatcher.createJobManagerRunner above
  2. Dispatcher.startJobManagerRunner
  3. JobManagerRunner.start
  4. StandaloneLeaderElectionService.start
  5. JobManagerRunner.grantLeadership
  6. JobManagerRunner.verifyJobSchedulingStatusAndStartJobManager
  7. JobMaster.start
  8. JobMaster.startJobExecution
  9. JobMaster.resetAndScheduleExecutionGraph
  10. JobMaster.scheduleExecutionGraph
  11. ExecutionGraph.scheduleForExecution
  12. ExecutionGraph.scheduleEager
  13. Execution.deploy