ClickHouse Best Practices: An Analysis of the Distributed Table Write Flow

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/af/af9f6637b50b09be60b00a42f3812d5e.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"雲妹導讀:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"前不久,京東智聯雲正式上線了基於Clickhouse的分析型雲數據庫JCHDB,一經推出便受到廣大用戶的極大關注。有興趣的小夥伴可以回顧上一篇文章"},{"type":"link","attrs":{"href":"https://mp.weixin.qq.com/s?__biz=MzU1OTgxMTg2Nw==&mid=2247494299&idx=1&sn=3bb704ece56a19c4c4a335fdfbcd33b3&scene=21#wechat_redirect","title":null},"content":[{"type":"text","text":"《比MySQL快839倍!揭開分析型數據庫JCHDB的神祕面紗》"}]},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ClickHouse像ElasticSearch一樣具有數據分片(shard)的概念,這也是分佈式存儲的特點之一,即通過並行讀寫提高效率。ClickHouse依靠Distributed引擎實現了Distributed(分佈式)表機制,在所有分片(本地表)上建立視圖進行分佈式查詢,使用很方便。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/d2/d284f81bf66f5e5995fda6d216af0294.webp","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Distributed表引擎是"},{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"一種特殊的表引擎,自身不會存儲任何數據,而是通過讀取或寫入其他遠端節點上的表進行數據處理的表引擎。"},{"type":"text","text":"該表引擎需要依賴各個節點的本地表來創建,本地表的存在是Distributed表創建的依賴條件,創建語句如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"CREATE TABLE {teble} ON CLUSTER {cluster}\nAS {local_table}\nENGINE= Distributed({cluster}, {database}, {local_table},{policy})\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這裏的policy一般可以使用隨機(例如rand())或哈希(例如halfMD5hash(id))。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"再來看下ClickHouse集羣節點配置文件,相關參數如下:"}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"\n \n \n 1\n true\n \n 1\n example01-01-1\n 9000\n \n \n example01-01-2\n 9000\n \n \n \n 2\n true\n \n example01-02-1\n 9000\n \n \n example01-02-2\n 9000\n \n \n 
\n"}]},{"type":"heading","attrs":{"align":null,"level":3}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fb/fb2cb9e4ef39402d3f825eef487d3213.webp","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"heading","attrs":{"align":null,"level":3}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"有了上面的基礎瞭解,就將進入主題了,本文主要是對Distributed表如何寫入及如何分發做一下分析,略過SQL的詞法解析、語法解析等步驟,從寫入流開始,其構造方法如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"DistributedBlockOutputStream(const Context & context_, StorageDistributed &\nstorage_, const ASTPtr & query_ast_, const ClusterPtr & cluster_, bool\ninsert_sync_, UInt64 insert_timeout_);"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果insert_sync_爲true,表示是同步寫入,並配合insert_timeout_參數使用(insert_timeout_爲零表示沒有超時時間);如果insert_sync_爲false,表示寫入是異步。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1,同步寫入還是異步寫入"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同步寫入是指數據直寫入實際的表中,而異步寫入是指數據首先被寫入本地文件系統,然後發送到遠端節點。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const Context &\ncontext)\n{\n ......\n\n /// Force sync insertion if it is remote() table function\n bool insert_sync = settings.insert_distributed_sync || owned_cluster;\n auto timeout = settings.insert_distributed_timeout;\n /// DistributedBlockOutputStream will not own cluster, but will own \nConnectionPools of the cluster\n return std::make_shared(\n context, *this, createInsertToRemoteTableQuery(remote_database,\nremote_table, getSampleBlockNonMaterialized()), cluster,\n nsert_sync, timeout);\n}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"是否執行同步寫入是由insert_sync決定的,最終是由是否配置insert_distributed_sync(默認爲false)和owned_cluster值的或關係決定的,一般在使用MergeTree之類的普通表引擎時,通常是異步寫入,但在使用表函數時(使用owned_cluster來判斷是否是表函數),通常會使用同步寫入。這也是在設計業務邏輯時需要注意的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"owned_cluster是什麼時候賦值的呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"StoragePtr TableFunctionRemoteexecuteImpl(const ASTPtr & astfunction, const Context & \ncontext, const stdstring & tablename) const\n{ \n ......\n StoragePtr res = remotetablefunction_ptr\n ? 
So when is owned_cluster assigned?

```cpp
StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & ast_function, const Context & context,
    const std::string & table_name) const
{
    ......
    StoragePtr res = remote_table_function_ptr
        ? StorageDistributed::createWithOwnCluster(
            table_name,
            structure_remote_table,
            remote_table_function_ptr,
            cluster,
            context)
        : StorageDistributed::createWithOwnCluster(
            table_name,
            structure_remote_table,
            remote_database,
            remote_table,
            cluster,
            context);
    ......
}

StoragePtr StorageDistributed::createWithOwnCluster(
    const std::string & table_name_,
    const ColumnsDescription & columns_,
    ASTPtr & remote_table_function_ptr_,
    ClusterPtr & owned_cluster_,
    const Context & context_)
{
    auto res = create(String{}, table_name_, columns_, ConstraintsDescription{},
        remote_table_function_ptr_, String{}, context_, ASTPtr(), String(), false);
    res->owned_cluster = owned_cluster_;
    return res;
}
```

You can see that when a remote table is created, owned_cluster_ is assigned based on the remote_table_function_ptr parameter, so it ends up evaluating to true.

2. How is the asynchronous write implemented?

Now that we know when writes are synchronous and when they are asynchronous, let's move on to the actual write path. Synchronous writes come up rarely in common scenarios, so the analysis below focuses on the asynchronous logic. The main flow of the output stream's write method is:

```text
DistributedBlockOutputStream::write()
              ↓
        if insert_sync
        |           |
      true        false
        ↓           ↓
  writeSync()   writeAsync()
```

This write method overrides virtual void IBlockOutputStream::write(const Block & block), so when a node receives the stream and calls its write method it enters this logic, and insert_sync decides whether the synchronous or the asynchronous path is taken.

3. Write to the local node or to remote nodes?

Sticking with the asynchronous path: writeAsync() is ultimately implemented by writeAsyncImpl(), whose rough flow is:

```text
                    writeAsyncImpl()
                           ↓
        if shard_info.hasInternalReplication()
            |                              |
          true                           false
            ↓                              ↓
     writeToLocal()                  writeToLocal()
            ↓                              ↓
     writeToShard()      for (every remote node) { writeToShard() }
            ↓                              ↓
           end                            end
```

The getShardsInfo() method reads the cluster node information from the config.xml configuration file, and hasInternalReplication() corresponds to the internal_replication parameter in that file: if it is true, the flow takes the outer if branch, otherwise the else branch.

The writeToLocal() step is the same in both branches: if the shard contains the local node, the local node is written first. The writeToShard() part that follows is where the value of internal_replication decides whether the data is written to just one remote node or to every remote node once.
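The shard and replica topology that getShardsInfo() works from can also be inspected at the SQL level. A small sketch, assuming the cluster is named logs as in the earlier configuration:

```sql
-- Shows each shard's weight and replicas, and which replica is the local node.
SELECT cluster, shard_num, shard_weight, replica_num, host_name, port, is_local
FROM system.clusters
WHERE cluster = 'logs';
```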
end\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中getShardsInfo()方法就是獲取config.xml配置文件中獲取集羣節點信息,hasInternalReplication()就對應着配置文件中的internal_replication參數,如果爲true,就會進入最外層的if邏輯,否則就會進入else邏輯。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中writeToLocal()方法是相同的,是指如果shard包含本地節點,優先選擇本地節點進行寫入;後半部分writeToShard()就是根據internal_replication參數的取值來決定是寫入其中一個遠端節點,還是所有遠端節點都寫一次。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"4,數據如何寫入本地節點"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當然一般情況Distributed表還是基於ReplicatedMergeTree系列表進行創建,而不是基於表函數的,所以大多數場景還是會先寫入本地再分發到遠端節點。那寫入Distributed表的數據是如何保證原子性落盤而不會在數據正在寫入的過程中就把不完整的數據發送給遠端其他節點呢?看下writeToShard()方法大致邏輯,如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":" writeToShard()\n ↓\nfor(every dir_names){\n |\n └──if first iteration\n | |\n false true\n ↓ ↓ \n | ├──storage.requireDirectoryMonitor()\n | ├──CompressedWriteBuffer\n | ├──writeStringBinary()\n | ├──stream.writePrefix()\n | ├──stream.write(block)\n | ├──stream.writeSuffix()\n ↘ ↙ \n link(tmp_file, file) \n └──} \n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"繼續具體再看下源碼的具體實現,如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"void DistributedBlockOutputStream::writeToShard(const Block & block, const\nstd::vector<:string> & dir_names) \n{\n /** tmp directory is used to ensure atomicity of transactions\n * and keep monitor thread out from reading incomplete data\n */\n std::string first_file_tmp_path{};\n\n auto first = true;\n\n /// write first file, hardlink the others\n for (const auto & dir_name : dir_names)\n {\n const auto & path = storage.getPath() + dir_name + '/';\n\n /// ensure shard subdirectory creation and notify storage\n if (Poco::File(path).createDirectory())\n storage.requireDirectoryMonitor(dir_name);\n\n const auto & file_name = toString(storage.file_names_increment.get()) +\n\".bin\";\n const auto & block_file_path = path + file_name;\n\n /** on first iteration write block to a temporary directory for \nsubsequent hardlinking to ensure\n * the inode is not freed until we're done */\n if (first)\n {\n first = false;\n\n const auto & tmp_path = path + \"tmp/\";\n Poco::File(tmp_path).createDirectory();\n const auto & block_file_tmp_path = tmp_path + file_name;\n\n first_file_tmp_path = block_file_tmp_path;\n\n WriteBufferFromFile out{block_file_tmp_path};\n CompressedWriteBuffer compress{out};\n NativeBlockOutputStream stream{compress, ClickHouseRevision::get(),\nblock.cloneEmpty()};\n\n writeVarUInt(UInt64(DBMS_DISTRIBUTED_SENDS_MAGIC_NUMBER), out);\n context.getSettingsRef().serialize(out);\n 
5. How is data dispatched to each node?

The attentive reader may have noticed the requireDirectoryMonitor() call in writeToShard(). This method registers the shard directory for monitoring, and the dedicated class StorageDistributedDirectoryMonitor then takes care of dispatching the data files, either one at a time or in batches depending on configuration; it also includes fault handling for broken files.

[Image: https://static001.geekbang.org/infoq/b8/b8c1b4ef181102a1a2759e1fd3ddd674.jpeg]
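The monitor's sending can also be controlled manually, which is handy when inspecting the spool directories. A sketch using the SYSTEM statements against the hypothetical events_dist table:

```sql
-- Temporarily stop the directory monitor from sending queued .bin files.
SYSTEM STOP DISTRIBUTED SENDS default.events_dist;

-- ... inspect /var/lib/clickhouse/data/default/events_dist/ here ...

-- Resume background sends, then force all pending files to be dispatched now.
SYSTEM START DISTRIBUTED SENDS default.events_dist;
SYSTEM FLUSH DISTRIBUTED default.events_dist;
```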
igin":null},"content":[{"type":"text","text":"上面提到過writeAsync()的最終實現方法是writeAsyncImpl,這個說法是沒問題的,但是中間還有段關鍵邏輯,如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":" writeAsync()\n ↓\nif storage.getShardingKeyExpr() && (cluster->getShardsInfo().size() > 1\n | |\n true false\n ↓ ↓\n writeAsyncImpl(block) writeSplitAsync(block)\n ↓\n splitBlock(block)\n ↓\n writeAsyncImpl(splitted_blocks,shard_idx)\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"getShardingKeyExpr()方法就是去獲取sharding_key生成的表達式指針,該表達式是在創建表時就生成的,如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"sharding_key_expr = buildShardingKeyExpression(sharding_key_, global_context,\ngetColumns().getAllPhysical(), false);\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那sharding_key和sharding_key_expr是什麼關係呢?如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"const ExpressionActionsPtr & getShardingKeyExpr() const { return \nsharding_key_expr; }\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以說sharding_key_expr最終主要就是由sharding_key決定的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一般情況下getShardingKeyExpr()方法都爲true,如果再滿足shard數量大於1,就會對block進行拆分,由splitBlock()方法主要邏輯就是創建selector並使用selector進行切割,大致邏輯如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":" splitBlock()\n ↓\n createSelector(block)\n ↓\nfor(every shard){column->scatter(num_shards, selector);}\n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於如何創建selector以及selector中都做了什麼事兒,來具體看下源碼截取,如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"IColumn::Selector DistributedBlockOutputStream::createSelector(const Block &\nsource_block)\n{\n Block current_block_with_sharding_key_expr = source_block;\n storage.getShardingKeyExpr()- \n>execute(current_block_with_sharding_key_expr);\n\n const auto & key_column =\ncurrent_block_with_sharding_key_expr.getByName(storage.getShardingKeyColumnName\n());\n const auto & slot_to_shard = cluster->getSlotToShard();\n ......\n throw Exception{\"Sharding key expression does not evaluate to an integer \ntype\", 
The details are fairly involved and are not analyzed in depth here; interested readers can dig in on their own (for example the template function IColumn::Selector createBlockSelector()).

With that, the key points of the Distributed table write flow have been covered. Space is limited, so some details were not explained at length; feel free to explore them further yourself.

[Image: https://static001.geekbang.org/infoq/2a/2a81e637daad92bfb11590923632fcca.png]

Having walked through the write flow of the Distributed table, we now understand how this type of table actually works, so there are a few points worth paying attention to in real-world use:

1. Writing through a Distributed table generates temporary data on the local node, i.e. write amplification, which costs extra CPU and memory; it is therefore advisable to write through Distributed tables as little as possible (see the sketch after this list).
2. The temporary blocks written by a Distributed table are the original block re-split according to sharding_key and weight, so more blocks are dispatched to the remote nodes, which also increases the merge burden there.
3. If a Distributed table is created from a table function, writes are generally synchronous; be aware of this.
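As a sketch of the first recommendation, heavy ingestion can target the local tables on each shard directly and leave the Distributed table for queries; the table names are the hypothetical ones used earlier:

```sql
-- Write path: insert into the shard's local table directly,
-- letting the client (or a load balancer) pick the shard.
INSERT INTO default.events_local (id, ts, value)
VALUES (42, now(), 1.5);

-- Read path: query through the Distributed table as usual.
SELECT count() FROM default.events_dist;
```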
Understanding the principles makes the system easier to use well, and makes problems easier to optimize when they arise.

Click 【閱讀原文】 (https://www.jdcloud.com/cn/products/jcs-for-jchdb?utm_source=PMM_infoq&utm_medium=NAutm_campaign=ReadMoreutm_term=NA) to go to the JD Intelligent Cloud console and start a trial of JCHDB.