Debugging Stateful Flink Jobs in Local Mode

Flink version: 1.8.0

Scala version: 2.11

GitHub: https://github.com/shirukai/debug-flink-state-example.git

When developing a stateful Flink job locally, two questions come up constantly: is the state actually taking effect, and after a restart, can the data in the state be recovered from a checkpoint? The key fact is that Flink does not load state automatically on restart; you must point it at a checkpoint explicitly. Coming to Flink from Spark Structured Streaming, I ran into exactly this. In Spark, state is reloaded automatically when the application starts again, so I assumed Flink state would be loaded automatically too. When I tested a stateful operator and restarted the application, it did not resume from the previous state. I first suspected my checkpoint configuration and spent a long time debugging before realizing that Flink needs the checkpoint path to be specified manually. This article walks from project setup through writing a stateful job, and shows how to debug a local-mode stateful Flink job in IDEA.

1 Creating a Flink Project from the Official Template

Flink provides a Maven archetype that makes it easy to bootstrap a project. Run the following command to create a Flink project:

mvn archetype:generate -DarchetypeGroupId=org.apache.flink -DarchetypeArtifactId=flink-quickstart-scala -DarchetypeVersion=1.8.0 -DgroupId=debug.flink.state.example -DartifactId=debug-flink-state-example -Dversion=1.0 -Dpackage=debug.flink.state.example -DinteractiveMode=false

Once the project is created, open it in IDEA.

Then make a few small adjustments to pom.xml (for example, the quickstart pom marks the Flink dependencies as provided, which needs attention if you want to run the job directly in IDEA).

2 Writing a Simple Stateful Job

Let's write a simple Flink job that does the following:

  1. Receives text in real time from a socket text stream
  2. Converts each line into an event case class with three fields: id, value, and time
  3. Keys the stream by id and uses a process function to count the events and sum the values for each id
  4. Prints the results to the console

The logic is straightforward, so here is the code:

package debug.flink.state.example

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

/**
 * Counts events and sums their values in real time.
 *
 * @author shirukai
 */

object EventCounterJob {

  def main(args: Array[String]): Unit = {
    // Get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // 1. Receive text data from a socket
    val streamText: DataStream[String] = env.socketTextStream("127.0.0.1", 9000)

    // 2. Split each line on spaces and convert it into an Event case class
    val events = streamText.map(s => {
      val tokens = s.split(" ")
      Event(tokens(0), tokens(1).toDouble, tokens(2).toLong)
    })
    // 3. Key by event id, then aggregate with a process function
    val counterResult = events.keyBy(_.id).process(new EventCounterProcessFunction)

    // 4. Print the results to the console
    counterResult.print()

    env.execute("EventCounterJob")
  }
}

/**
 * Event case class.
 *
 * @param id    event type id
 * @param value event value
 * @param time  event timestamp
 */
case class Event(id: String, value: Double, time: Long)

/**
 * Counter case class holding the per-id aggregates.
 *
 * @param id    event type id
 * @param sum   sum of event values
 * @param count number of events
 */
case class EventCounter(id: String, var sum: Double, var count: Int)

/**
 * KeyedProcessFunction that aggregates events per key.
 */
class EventCounterProcessFunction extends KeyedProcessFunction[String, Event, EventCounter] {
  private var counterState: ValueState[EventCounter] = _

  override def open(parameters: Configuration): Unit = {
    super.open(parameters)
    // Obtain the state handle from the Flink runtime context
    counterState = getRuntimeContext.getState(new ValueStateDescriptor[EventCounter]("event-counter", classOf[EventCounter]))
  }

  override def processElement(i: Event,
                              context: KeyedProcessFunction[String, Event, EventCounter]#Context,
                              collector: Collector[EventCounter]): Unit = {

    // Read the counter from state; fall back to an initial value if absent
    val counter = Option(counterState.value()).getOrElse(EventCounter(i.id, 0.0, 0))

    // Aggregate
    counter.count += 1
    counter.sum += i.value

    // Emit the result downstream
    collector.collect(counter)

    // Persist the updated counter back to state
    counterState.update(counter)

  }
}

Listen on port 9000 with nc:

nc -lk 9000

Start the Flink job and send the following test data:

event-1 1 1591695864473
event-1 12 1591695864474
event-2 8 1591695864475
event-1 10 1591695864476
event-2 50 1591695864477
event-1 6 1591695864478

As each record arrives, the job prints the running totals for its id to the console.

3 Configuring Checkpoints

The job above is stateful, but nothing persists the state: restart the program and the state is gone. We need to configure checkpointing for the job. Three simple settings are required, shown in the snippet below:

  1. Enable checkpointing and set the interval between two checkpoints
  2. Retain checkpoints automatically when the job is cancelled
  3. Configure a file-based state backend

    // Required imports:
    // import org.apache.flink.runtime.state.filesystem.FsStateBackend
    // import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup

    // Configure checkpointing
    // Trigger a checkpoint every second
    env.enableCheckpointing(1000)
    // Keep the checkpoint when the job is cancelled. By default checkpoints are
    // deleted when the whole job is cancelled; checkpoints are scoped to the job.
    env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
    // State backends: MemoryStateBackend, FsStateBackend, RocksDBStateBackend.
    // Here we use the file-based backend.
    env.setStateBackend(new FsStateBackend("file:///tmp/checkpoints/event-counter"))


Start the program and feed it data as before. This time, send only the first three records:

event-1 1 1591695864473
event-1 12 1591695864474
event-2 8 1591695864475


The job logs show Flink triggering a checkpoint every second:

15:59:32,989 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering checkpoint 102 @ 1592035172989 for job 0c3d201188fc9953cb65498adb4954f4.
15:59:32,997 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Completed checkpoint 102 for job 0c3d201188fc9953cb65498adb4954f4 (21340 bytes in 7 ms).
15:59:33,990 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering checkpoint 103 @ 1592035173989 for job 0c3d201188fc9953cb65498adb4954f4.
15:59:34,001 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Completed checkpoint 103 for job 0c3d201188fc9953cb65498adb4954f4 (21340 bytes in 11 ms).
15:59:34,989 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Triggering checkpoint 104 @ 1592035174989 for job 0c3d201188fc9953cb65498adb4954f4.
15:59:35,006 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Completed checkpoint 104 for job 0c3d201188fc9953cb65498adb4954f4 (21340 bytes in 15 ms).

Checking the checkpoint directory confirms that checkpoints are being written:

ls /tmp/checkpoints/event-counter

A quick note on this directory layout: every run of the program creates a directory named after the job id under the configured root (e.g. /tmp/checkpoints/event-counter). That directory contains three subdirectories, chk-*, shared, and taskowned; each completed checkpoint is saved in its own chk-* directory, whose _metadata file is what a restore ultimately points at. The overall structure looks like this:

/tmp/checkpoints
└── event-counter
    └── 0c3d201188fc9953cb65498adb4954f4
        ├── chk-104
        │   ├── 01f2561f-ca48-4699-bbea-40fc849b2b0f
        │   ├── 021a7b75-f034-4da3-ad0c-e9801a8f1141
        │   ├── 17fcf354-c212-43ec-8e7c-99e37a7653c9
        │   ├── 33af50a1-e2cb-4364-a723-4c182c5fdb47
        │   ├── 3fa88dc7-ea81-4735-83ba-3d4630b7b8ac
        │   ├── 792068d4-2f89-4d21-aa27-88ef61c7fa99
        │   ├── 793d349b-8029-4cb6-b522-22445ec19bae
        │   ├── _metadata
        │   ├── acd28b9b-a0cb-4880-9564-9b9fe3c29200
        │   ├── c7cbb990-917a-400d-9838-1ac28c92ea10
        │   ├── e202ca66-5f9e-4858-bf15-02ca17a4e2b1
        │   ├── e7370373-c4be-4c7c-b6df-d959127b31a3
        │   └── eb619830-b102-4449-a29c-59d82b6bfbfe
        ├── shared
        └── taskowned

Restart the program and send the remaining three records:

event-1 10 1591695864476
event-2 50 1591695864477
event-1 6 1591695864478


When we send event-1 10 1591695864476, the expected result is EventCounter(event-1,23.0,3): the restored state should already hold a sum of 13.0 and a count of 2 for event-1. What we actually get is EventCounter(event-1,10.0,1), so the previous state is clearly lost. The reason was stated at the beginning: Flink does not load earlier state automatically; the checkpoint has to be specified manually. When submitting from the command line, the -s parameter points the job at a savepoint (or retained checkpoint) directory. But how do we do the same while developing and testing in IDEA? The next section patches the Flink source to load the checkpoint for us.
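
For reference, this is roughly what the equivalent CLI submission with a restore looks like; the job id and checkpoint number are illustrative (borrowed from the listing above):

flink run -s /tmp/checkpoints/event-counter/0c3d201188fc9953cb65498adb4954f4/chk-104 target/debug-flink-state-example-1.0.jar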

4 Patching LocalStreamEnvironment

4.1 The Approach

First, the idea. When env.execute("EventCounterJob") runs, the program picks a StreamExecutionEnvironment matching where it is executing. Flink's two main execution environments are LocalStreamEnvironment and RemoteStreamEnvironment, and running directly in IDEA uses LocalStreamEnvironment. Reading the RemoteStreamEnvironment source shows that when it finally builds the JobGraph, it hands its SavepointRestoreSettings to the graph via JobGraph's setSavepointRestoreSettings method. The JobGraph built by LocalStreamEnvironment receives no SavepointRestoreSettings at all, so we will modify the source to add that setting to the JobGraph.

RemoteStreamEnvironment lives at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment, and LocalStreamEnvironment at org.apache.flink.streaming.api.environment.LocalStreamEnvironment. The latter's execute() implementation looks like this:

	public JobExecutionResult execute(String jobName) throws Exception {
		// transform the streaming program into a JobGraph
		StreamGraph streamGraph = getStreamGraph();
		streamGraph.setJobName(jobName);

		JobGraph jobGraph = streamGraph.getJobGraph();
		jobGraph.setAllowQueuedScheduling(true);

		Configuration configuration = new Configuration();
		configuration.addAll(jobGraph.getJobConfiguration());
		configuration.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "0");

		// add (and override) the settings with what the user defined
		configuration.addAll(this.configuration);

		if (!configuration.contains(RestOptions.BIND_PORT)) {
			configuration.setString(RestOptions.BIND_PORT, "0");
		}

		int numSlotsPerTaskManager = configuration.getInteger(TaskManagerOptions.NUM_TASK_SLOTS, jobGraph.getMaximumParallelism());

		MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
			.setConfiguration(configuration)
			.setNumSlotsPerTaskManager(numSlotsPerTaskManager)
			.build();

		if (LOG.isInfoEnabled()) {
			LOG.info("Running job on local embedded Flink mini cluster");
		}

		MiniCluster miniCluster = new MiniCluster(cfg);

		try {
			miniCluster.start();
			configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());

			return miniCluster.executeJobBlocking(jobGraph);
		}
		finally {
			transformations.clear();
			miniCluster.close();
		}
	}

The overall flow of this code:

  1. Get the StreamGraph
  2. Build the JobGraph from the StreamGraph
  3. Assemble the configuration
  4. Create a MiniCluster
  5. Submit the JobGraph to the MiniCluster

Right before the JobGraph is submitted to the MiniCluster, we can set SavepointRestoreSettings on it dynamically, which makes the job restore from the savepoint or checkpoint we choose.
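
The essential change boils down to one call on the JobGraph, the same call RemoteStreamEnvironment makes; a minimal sketch in Scala syntax (the path is illustrative):

    // Tell the JobGraph to restore from a specific checkpoint before submission
    jobGraph.setSavepointRestoreSettings(
      SavepointRestoreSettings.forPath("file:///tmp/checkpoints/event-counter/<job-id>/chk-104"))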

4.2 Rewriting LocalStreamEnvironment

  1. Under the project's java sources, create the package org.apache.flink.streaming.api.environment
  2. In that package, create a class named LocalStreamEnvironment
  3. Give the class the following content:
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.streaming.api.environment;

import org.apache.flink.annotation.Public;
import org.apache.flink.api.common.InvalidProgramException;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
import org.apache.flink.streaming.api.graph.StreamGraph;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Nonnull;
import java.util.Map;

/**
 * The LocalStreamEnvironment is a StreamExecutionEnvironment that runs the program locally,
 * multi-threaded, in the JVM where the environment is instantiated. It spawns an embedded
 * Flink cluster in the background and executes the program on that cluster.
 *
 * <p>When this environment is instantiated, it uses a default parallelism of {@code 1}. The default
 * parallelism can be set via {@link #setParallelism(int)}.
 */
@Public
public class LocalStreamEnvironment extends StreamExecutionEnvironment {

    private static final Logger LOG = LoggerFactory.getLogger(LocalStreamEnvironment.class);

    private final Configuration configuration;

    private static final String LAST_CHECKPOINT = "last-checkpoint";

    /**
     * Creates a new mini cluster stream environment that uses the default configuration.
     */
    public LocalStreamEnvironment() {
        this(new Configuration());
    }

    /**
     * Creates a new mini cluster stream environment that configures its local executor with the given configuration.
     *
     * @param configuration The configuration used to configure the local executor.
     */
    public LocalStreamEnvironment(@Nonnull Configuration configuration) {
        if (!ExecutionEnvironment.areExplicitEnvironmentsAllowed()) {
            throw new InvalidProgramException(
                    "The LocalStreamEnvironment cannot be used when submitting a program through a client, " +
                            "or running in a TestEnvironment context.");
        }
        this.configuration = configuration;
        setParallelism(1);
    }

    protected Configuration getConfiguration() {
        return configuration;
    }

    /**
     * Executes the JobGraph on a mini cluster with a user-specified name.
     *
     * @param jobName name of the job
     * @return The result of the job execution, containing elapsed time and accumulators.
     */
    @Override
    public JobExecutionResult execute(String jobName) throws Exception {
        // transform the streaming program into a JobGraph
        StreamGraph streamGraph = getStreamGraph();
        streamGraph.setJobName(jobName);

        JobGraph jobGraph = streamGraph.getJobGraph();
        jobGraph.setAllowQueuedScheduling(true);

        // ##############################################################################
        // Get the global job parameters
        Map<String, String> parameters = this.getConfig().getGlobalJobParameters().toMap();
        if (parameters.containsKey(LAST_CHECKPOINT)) {
            // Load the checkpoint
            String checkpointPath = parameters.get(LAST_CHECKPOINT);
            jobGraph.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(checkpointPath));
            LOG.info("Load savepoint from {}.", checkpointPath);
        }
        // ##############################################################################

        Configuration configuration = new Configuration();
        configuration.addAll(jobGraph.getJobConfiguration());
        configuration.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "0");

        // add (and override) the settings with what the user defined
        configuration.addAll(this.configuration);

        if (!configuration.contains(RestOptions.BIND_PORT)) {
            configuration.setString(RestOptions.BIND_PORT, "0");
        }

        int numSlotsPerTaskManager = configuration.getInteger(TaskManagerOptions.NUM_TASK_SLOTS, jobGraph.getMaximumParallelism());

        MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
                .setConfiguration(configuration)
                .setNumSlotsPerTaskManager(numSlotsPerTaskManager)
                .build();

        if (LOG.isInfoEnabled()) {
            LOG.info("Running job on local embedded Flink mini cluster");
        }

        MiniCluster miniCluster = new MiniCluster(cfg);

        try {
            miniCluster.start();
            configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());

            return miniCluster.executeJobBlocking(jobGraph);
        } finally {
            transformations.clear();
            miniCluster.close();
        }
    }
}

The idea behind the patched section: read the path of the most recent checkpoint from the job's global parameters (we pass it in ourselves), then apply it with jobGraph.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(checkpointPath)). Because this class sits in our own source tree under the same package and class name, it shadows the version bundled in the Flink jar: in a typical IDEA run, project classes precede dependency jars on the classpath.

4.3 Modifying the Main Program

Finally, modify the main program so that it automatically finds the most recent checkpoint path and passes it in through the job's global parameters. Add the following code:

    // Required imports:
    // import java.io.File
    // import org.apache.flink.api.java.utils.ParameterTool
    // import scala.reflect.io.Directory

    // Expects the program argument --checkpoint-dir to be supplied
    var params: ParameterTool = ParameterTool.fromArgs(args)
    val checkPointDirPath = params.get("checkpoint-dir")
    // Locate the most recently modified job directory under the checkpoint root
    val checkpointDirs = new Directory(new File(checkPointDirPath)).list
    if (checkpointDirs.nonEmpty) {
      val lastCheckpointDir = checkpointDirs.maxBy(_.lastModified)
      val checkpoints = new Directory(lastCheckpointDir.jfile).list.filter(_.name.startsWith("chk-"))
      if (checkpoints.nonEmpty) {
        val lastCheckpoint = checkpoints.maxBy(_.lastModified).path
        val newArgs = Array("--last-checkpoint", "file://" + lastCheckpoint)
        // Rebuild the parameters with the checkpoint path appended
        params = ParameterTool.fromArgs(args ++ newArgs)
      }
    }
    env.getConfig.setGlobalJobParameters(params)

    // ... code omitted ...

    // State backends: MemoryStateBackend, FsStateBackend, RocksDBStateBackend.
    // Here we use the file-based backend, rooted at the directory passed in.
    env.setStateBackend(new FsStateBackend("file://" + checkPointDirPath))
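
If you prefer to keep main() tidy, the scanning logic can be pulled into a small helper. Below is a sketch; the name findLastCheckpoint is mine, not part of the original repo, and it assumes the checkpoint directories live on the local filesystem:

    import java.io.File
    import scala.reflect.io.Directory

    /** Returns the newest chk-* directory beneath the most recently modified job directory, if any. */
    def findLastCheckpoint(checkpointRoot: String): Option[String] = {
      val jobDirs = new Directory(new File(checkpointRoot)).list.toList
      if (jobDirs.isEmpty) None
      else {
        // The newest job directory belongs to the most recent run
        val lastJobDir = jobDirs.maxBy(_.lastModified)
        val chks = new Directory(lastJobDir.jfile).list.filter(_.name.startsWith("chk-")).toList
        if (chks.isEmpty) None
        else Some("file://" + chks.maxBy(_.lastModified).path)
      }
    }

Wiring it in is then a one-liner: findLastCheckpoint(checkPointDirPath) yields the value to append as --last-checkpoint.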

4.4 Testing State Persistence

  1. Before testing, remove any existing checkpoints

    rm -rf /tmp/checkpoints/event-counter

  2. In a terminal, run nc -lk 9000

  3. Start the program with the program argument --checkpoint-dir /tmp/checkpoints/event-counter


  4. Send the first three records

    event-1 1 1591695864473
    event-1 12 1591695864474
    event-2 8 1591695864475
    


  5. Restart the application

  6. Send three more records; with the state restored, the counters should continue from their previous values

    event-1 1 1591695864473
    event-1 12 1591695864474
    event-2 8 1591695864475
    


5 Summary

With the patched LocalStreamEnvironment, the program automatically picks up the latest checkpoint of the most recent run from the configured checkpoint directory at startup and hands it to the JobGraph, so the job can resume from its previous state. This approach is only meant for verifying and debugging state locally. If you have the same need, give it a try, and if you know a better way, I'd be glad to hear about it.
