Flink SQL FileSystem Connector 分區提交與自定義小文件合併策略

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Flink SQL 的 FileSystem Connector 爲了與 Flink-Hive 集成的大環境適配,做了很多改進,而其中最爲明顯的就是分區提交(partition commit)機制。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文先通過源碼簡單過一下分區提交機制的兩個要素——即觸發(trigger)和策略(policy)的實現,然後用合併小文件的實例說一下自定義分區提交策略的方法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"PartitionCommitTrigger"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"path\n└── datetime=2019-08-25\n └── hour=11\n ├── part-0.parquet\n ├── part-1.parquet\n └── hour=12\n ├── part-0.parquet\n└── datetime=2019-08-26\n └── hour=6\n ├── part-0.parquet"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那麼,已經寫入的分區數據何時才能對下游可見呢?這就涉及到如何觸發分區提交的問題。根據官方文檔,觸發參數有以下兩個:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"sink.partition-commit.trigger"},{"type":"text","text":":可選 process-time(根據處理時間觸發)和 partition-time(根據從事件時間中提取的分區時間觸發)。"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"sink.partition-commit.delay"},{"type":"text","text":":分區提交的時延。如果 trigger 是 process-time,則以分區創建時的系統時間戳爲準,經過此時延後提交;如果 trigger 是 partition-time,則以分區創建時本身攜帶的事件時間戳爲準,當水印時間戳經過此時延後提交。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可見,process-time trigger 無法應對處理過程中出現的抖動,一旦數據遲到或者程序失敗重啓,數據就不能按照事件時間被歸入正確的分區了。所以在實際應用中,我們幾乎總是選用 partition-time trigger,並自己生成水印。當然我們也需要通過 partition.time-extractor.*一系列參數來指定抽取分區時間的規則(PartitionTimeExtractor),官方文檔說得很清楚,不再贅述。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在源碼中,PartitionCommitTrigger 的類圖如下。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/e8/e8628dd73fb5cfba75c08cc46016cec4.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面以分區時間觸發的 PartitionTimeCommitTrigger 爲例,簡單看看它的思路。直接上該類的完整代碼。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"public class PartitionTimeCommitTigger implements PartitionCommitTrigger {\n private static final ListStateDescriptor> PENDING_PARTITIONS_STATE_DESC =\n new ListStateDescriptor<>(\n \"pending-partitions\",\n new ListSerializer<>(StringSerializer.INSTANCE));\n\n private static final ListStateDescriptor> WATERMARKS_STATE_DESC =\n new ListStateDescriptor<>(\n \"checkpoint-id-to-watermark\",\n new MapSerializer<>(LongSerializer.INSTANCE, LongSerializer.INSTANCE));\n\n private final ListState> pendingPartitionsState;\n private final Set pendingPartitions;\n\n private final ListState> watermarksState;\n private final TreeMap watermarks;\n private final PartitionTimeExtractor extractor;\n private final long commitDelay;\n private final List partitionKeys;\n\n public PartitionTimeCommitTigger(\n boolean isRestored,\n OperatorStateStore stateStore,\n Configuration conf,\n ClassLoader cl,\n List partitionKeys) throws Exception {\n this.pendingPartitionsState = stateStore.getListState(PENDING_PARTITIONS_STATE_DESC);\n this.pendingPartitions = new HashSet<>();\n if (isRestored) {\n pendingPartitions.addAll(pendingPartitionsState.get().iterator().next());\n }\n\n this.partitionKeys = partitionKeys;\n this.commitDelay = conf.get(SINK_PARTITION_COMMIT_DELAY).toMillis();\n this.extractor = PartitionTimeExtractor.create(\n cl,\n conf.get(PARTITION_TIME_EXTRACTOR_KIND),\n conf.get(PARTITION_TIME_EXTRACTOR_CLASS),\n conf.get(PARTITION_TIME_EXTRACTOR_TIMESTAMP_PATTERN));\n\n this.watermarksState = stateStore.getListState(WATERMARKS_STATE_DESC);\n this.watermarks = new TreeMap<>();\n if (isRestored) {\n watermarks.putAll(watermarksState.get().iterator().next());\n }\n }\n\n @Override\n public void addPartition(String partition) {\n if (!StringUtils.isNullOrWhitespaceOnly(partition)) {\n this.pendingPartitions.add(partition);\n }\n }\n\n @Override\n public List committablePartitions(long checkpointId) {\n if (!watermarks.containsKey(checkpointId)) {\n throw new IllegalArgumentException(String.format(\n \"Checkpoint(%d) has not been snapshot. The watermark information is: %s.\",\n checkpointId, watermarks));\n }\n\n long watermark = watermarks.get(checkpointId);\n watermarks.headMap(checkpointId, true).clear();\n\n List needCommit = new ArrayList<>();\n Iterator iter = pendingPartitions.iterator();\n while (iter.hasNext()) {\n String partition = iter.next();\n LocalDateTime partTime = extractor.extract(\n partitionKeys, extractPartitionValues(new Path(partition)));\n if (watermark > toMills(partTime) + commitDelay) {\n needCommit.add(partition);\n iter.remove();\n }\n }\n return needCommit;\n }\n\n @Override\n public void snapshotState(long checkpointId, long watermark) throws Exception {\n pendingPartitionsState.clear();\n pendingPartitionsState.add(new ArrayList<>(pendingPartitions));\n\n watermarks.put(checkpointId, watermark);\n watermarksState.clear();\n watermarksState.add(new HashMap<>(watermarks));\n }\n\n @Override\n public List endInput() {\n ArrayList partitions = new ArrayList<>(pendingPartitions);\n pendingPartitions.clear();\n return partitions;\n }\n}"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"注意到該類中維護了兩對必要的信息:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"pendingPartitions/pendingPartitionsState"},{"type":"text","text":":等待提交的分區以及對應的狀態;"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"watermarks/watermarksState"},{"type":"text","text":":的映射關係(用 TreeMap 存儲以保證有序)以及對應的狀態。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這也說明開啓檢查點是分區提交機制的前提。snapshotState() 方法用於將這些信息保存到狀態中。這樣在程序 failover 時,也能夠保證分區數據的完整和正確。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那麼 PartitionTimeCommitTigger 是如何知道該提交哪些分區的呢?來看 committablePartitions() 方法:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"檢查 checkpoint ID 是否合法;"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"取出當前 checkpoint ID 對應的水印,並調用 TreeMap的headMap() 和 clear() 方法刪掉早於當前 checkpoint ID 的水印數據(沒用了);"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"遍歷等待提交的分區,調用之前定義的 PartitionTimeExtractor(比如${year}-${month}-${day} ${hour}:00:00)抽取分區時間。如果水印時間已經超過了分區時間加上上述 sink.partition-commit.delay 參數,說明可以提交,並返回它們。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PartitionCommitTrigger 的邏輯會在負責真正提交分區的 StreamingFileCommitter 組件中用到(注意 StreamingFileCommitter 的並行度固定爲 1,之前有人問過這件事)。StreamingFileCommitter 和 StreamingFileWriter(即 SQL 版 StreamingFileSink)的細節相對比較複雜,本文不表,之後會詳細說明。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"PartitionCommitPolicy"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"metastore"},{"type":"text","text":":向 Hive Metastore 更新分區信息(僅在使用 HiveCatalog 時有效);"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"success-file"},{"type":"text","text":":向分區目錄下寫一個表示成功的文件,文件名可以通過 sink.partition-commit.success-file.name 參數自定義,默認爲_SUCCESS;"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"custom"},{"type":"text","text":":自定義的提交策略,需要通過 sink.partition-commit.policy.class 參數來指定策略的類名。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PartitionCommitPolicy 的內部實現就簡單多了,類圖如下。策略的具體邏輯通過覆寫 commit() 方法實現。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/d5/d591b80fc4ded00ab0842e2faf510dd7.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"兩個默認實現 MetastoreCommitPolicy 和 SuccessFileCommitPolicy 如下,都非常容易理解。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"public class MetastoreCommitPolicy implements PartitionCommitPolicy {\n private static final Logger LOG = LoggerFactory.getLogger(MetastoreCommitPolicy.class);\n\n private TableMetaStore metaStore;\n\n public void setMetastore(TableMetaStore metaStore) {\n this.metaStore = metaStore;\n }\n\n @Override\n public void commit(Context context) throws Exception {\n LinkedHashMap partitionSpec = context.partitionSpec();\n metaStore.createOrAlterPartition(partitionSpec, context.partitionPath());\n LOG.info(\"Committed partition {} to metastore\", partitionSpec);\n }\n}"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"public class SuccessFileCommitPolicy implements PartitionCommitPolicy {\n private static final Logger LOG = LoggerFactory.getLogger(SuccessFileCommitPolicy.class);\n\n private final String fileName;\n private final FileSystem fileSystem;\n\n public SuccessFileCommitPolicy(String fileName, FileSystem fileSystem) {\n this.fileName = fileName;\n this.fileSystem = fileSystem;\n }\n\n @Override\n public void commit(Context context) throws Exception {\n fileSystem.create(\n new Path(context.partitionPath(), fileName),\n FileSystem.WriteMode.OVERWRITE).close();\n LOG.info(\"Committed partition {} with success file\", context.partitionSpec());\n }\n}"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"Customize PartitionCommitPolicy"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/05/05393014ee52cf8152936540614e0647.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由上圖可見,在寫入比較頻繁或者並行度比較大時,每個分區內都會出現很多細碎的小文件,這是我們不樂意看到的。下面嘗試自定義 PartitionCommitPolicy,實現在分區提交時將它們順便合併在一起(存儲格式爲 Parquet)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/f3/f3e2a8a0bc0bc41110cb32b8df909977.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Parquet 格式與普通的TextFile等行存儲格式不同,它是自描述(自帶 schema 和 metadata)的列存儲,數據結構按照 Google Dremel 的標準格式來組織,與 Protobuf 相同。所以,我們應該先檢測寫入文件的 schema,再按照 schema 分別讀取它們,並拼合在一起。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面貼出合併分區內所有小文件的完整策略 ParquetFileMergingCommitPolicy。爲了保證依賴不衝突,Parquet 相關的組件全部採用 Flink shade 過的版本。竊以爲代碼寫得還算工整易懂,所以偷懶不寫註釋了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"package me.lmagics.flinkexp.hiveintegration.util;\n\nimport org.apache.flink.hive.shaded.parquet.example.data.Group;\nimport org.apache.flink.hive.shaded.parquet.hadoop.ParquetFileReader;\nimport org.apache.flink.hive.shaded.parquet.hadoop.ParquetFileWriter.Mode;\nimport org.apache.flink.hive.shaded.parquet.hadoop.ParquetReader;\nimport org.apache.flink.hive.shaded.parquet.hadoop.ParquetWriter;\nimport org.apache.flink.hive.shaded.parquet.hadoop.example.ExampleParquetWriter;\nimport org.apache.flink.hive.shaded.parquet.hadoop.example.GroupReadSupport;\nimport org.apache.flink.hive.shaded.parquet.hadoop.metadata.CompressionCodecName;\nimport org.apache.flink.hive.shaded.parquet.hadoop.metadata.ParquetMetadata;\nimport org.apache.flink.hive.shaded.parquet.hadoop.util.HadoopInputFile;\nimport org.apache.flink.hive.shaded.parquet.schema.MessageType;\nimport org.apache.flink.table.filesystem.PartitionCommitPolicy;\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.FileSystem;\nimport org.apache.hadoop.fs.LocatedFileStatus;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.fs.RemoteIterator;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class ParquetFileMergingCommitPolicy implements PartitionCommitPolicy {\n private static final Logger LOGGER = LoggerFactory.getLogger(ParquetFileMergingCommitPolicy.class);\n\n @Override\n public void commit(Context context) throws Exception {\n Configuration conf = new Configuration();\n FileSystem fs = FileSystem.get(conf);\n String partitionPath = context.partitionPath().getPath();\n\n List files = listAllFiles(fs, new Path(partitionPath), \"part-\");\n LOGGER.info(\"{} files in path {}\", files.size(), partitionPath);\n\n MessageType schema = getParquetSchema(files, conf);\n if (schema == null) {\n return;\n }\n LOGGER.info(\"Fetched parquet schema: {}\", schema.toString());\n\n Path result = merge(partitionPath, schema, files, fs);\n LOGGER.info(\"Files merged into {}\", result.toString());\n }\n\n private List listAllFiles(FileSystem fs, Path dir, String prefix) throws IOException {\n List result = new ArrayList<>();\n\n RemoteIterator dirIterator = fs.listFiles(dir, false);\n while (dirIterator.hasNext()) {\n LocatedFileStatus fileStatus = dirIterator.next();\n Path filePath = fileStatus.getPath();\n if (fileStatus.isFile() && filePath.getName().startsWith(prefix)) {\n result.add(filePath);\n }\n }\n\n return result;\n }\n\n private MessageType getParquetSchema(List files, Configuration conf) throws IOException {\n if (files.size() == 0) {\n return null;\n }\n\n HadoopInputFile inputFile = HadoopInputFile.fromPath(files.get(0), conf);\n ParquetFileReader reader = ParquetFileReader.open(inputFile);\n ParquetMetadata metadata = reader.getFooter();\n MessageType schema = metadata.getFileMetaData().getSchema();\n\n reader.close();\n return schema;\n }\n\n private Path merge(String partitionPath, MessageType schema, List files, FileSystem fs) throws IOException {\n Path mergeDest = new Path(partitionPath + \"/result-\" + System.currentTimeMillis() + \".parquet\");\n ParquetWriter writer = ExampleParquetWriter.builder(mergeDest)\n .withType(schema)\n .withConf(fs.getConf())\n .withWriteMode(Mode.CREATE)\n .withCompressionCodec(CompressionCodecName.SNAPPY)\n .build();\n\n for (Path file : files) {\n ParquetReader reader = ParquetReader.builder(new GroupReadSupport(), file)\n .withConf(fs.getConf())\n .build();\n Group data;\n while((data = reader.read()) != null) {\n writer.write(data);\n }\n reader.close();\n }\n writer.close();\n\n for (Path file : files) {\n fs.delete(file, false);\n }\n\n return mergeDest;\n }\n}"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"別忘了修改分區提交策略相關的參數:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"'sink.partition-commit.policy.kind' = 'metastore,success-file,custom', \n'sink.partition-commit.policy.class' = 'me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy'"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"重新跑一遍之前的 Hive Streaming 程序,觀察日誌輸出:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"20-08-04 22:15:00 INFO me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy - 14 files in path /user/hive/warehouse/hive_tmp.db/analytics_access_log_hive/ts_date=2020-08-04/ts_hour=22/ts_minute=13\n\n// 如果看官熟悉Protobuf的話,可以發現這裏的schema風格是完全一致的\n20-08-04 22:15:00 INFO me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy - Fetched parquet schema: \nmessage hive_schema {\n optional int64 ts;\n optional int64 user_id;\n optional binary event_type (UTF8);\n optional binary from_type (UTF8);\n optional binary column_type (UTF8);\n optional int64 site_id;\n optional int64 groupon_id;\n optional int64 partner_id;\n optional int64 merchandise_id;\n}\n\n20-08-04 22:15:04 INFO me.lmagics.flinkexp.hiveintegration.util.ParquetFileMergingCommitPolicy - Files merged into /user/hive/warehouse/hive_tmp.db/analytics_access_log_hive/ts_date=2020-08-04/ts_hour=22/ts_minute=13/result-1596550500950.parquet\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後來驗證一下,合併成功。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/7b/7bccd5f037bb36e24e41767b81e9d4e5.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上。感興趣的同學也可以動手測試~"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"原文鏈接:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"https://www.jianshu.com/p/fb7d29abfa14"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章