分布式集群如何实现高效的数据分布

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"一、前言"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随着互联网的发展,用户产生的数据越来越多,企业面临着庞大数据的存储问题,目前市面上主流的分布式大数据文件系统,都是对数据切片打散,通过离散方法将数据散列在集群的所有节点上,本文将带你了解DHT(Distributed Hash Table):分布式哈希表是如何实现数据的分布式离散存储的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"DHT(Distributed Hash Table):分布式哈希表"}]}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"二、技术背景"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"互联网发展早期,数据通常存储在单个服务器上,初期数据增长较为缓慢,可以通过提升单机的存储能力满足数据的增长需求;随着互联网普及程度的推进,用户数、用户产生和访问的数据呈指数增长;单机已无法存储用户需要的数据。为此,迫切需要多个服务器共同协作,存储量级更大的数据。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"三、传统 Hash"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"传统 Hash 通过算法 hash()=X mod S 来对数据进行分散,在元数据比较分散的情况下,数据能够很好的散列在集群节点中。由于S代表了集群的节点数,当进行集群的扩容缩容时,S的变化会影响到历史数据的命中问题,因此为了提高数据命中率,会产生大量测数据迁移,性能较差。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"四、一个简单的 DHT"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"分布式 Hash 通过构造一个长度为2的32次方(ipv4地址的数量)的环,将节点散列在 Hash环上,根据不同的 Hash算法计算出来的 Hash值有所不同,本文采用FNV Hash算法来计算 Hash值。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/22/22eadf4ad6b3cab9e2c2adf30442302d.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如图所示,先将存储节点通过 Hash计算后添加到 DHT 环上,每个节点距离上一个节点间的这段区间,作为该节点的数据分区,Hash值落在这个分区的数据将存储到这个节点上;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然后将数据通过 Hash算法散列到DHT环上,数据落到DHT环上后,按照顺时针方向找到离自己最近的节点作为数据存储节点,如下所示,数据 ObjectA 落到节点 NodeA上,数据 ObjectB 落到节点 NodeB 上;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/54/54315ea3294290867a694e3f9076f403.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"初始化DHT的源码如下:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/72/72a18615fbfbd5a16be7d2e32cedd236.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先定义了一个存放集群节点元数据的Map,用来存储接入到DHT环中的物理节点数据。然后定义了一个DHT环——vNodes,用来存储DHT环中的节点位置信息。这样我们就实现了一个简单的DHT环,通过addPhysicalNode方法可以模拟集群节点的加入。加入时会计算节点的Hash值并存放到vNodes中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/b5/b539b771da0052a6294dd62d390145d9.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"初始化4个存储节点。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/3b/3be96ccae6787905080e153b0520e7fc.webp","alt":null,"title":"","style":[{"key":"width","value":"25%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过countNodeValue方法插入100条数据,在写数据的过程中,根据数据的 Hash值找到DHT环上最近的一个节点,然后将数据写入该节点中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/65/65814b545314e796a64a0b10018b796d.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"插入100条数据后,各个节点的数据分布如下,可以看见4个节点的数据并不均匀,只有一个节点分配到了数据(这与写入的数据也有一定关系)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/ad/ad644893939362fabe57de2cadf853f0.png","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"插入100万条数据后,各个节点的数据分布如下,虽然每个节点都分配到了数据,但仍然出现了较大的数据倾斜。这将导致99%的请求都会由Node3进行处理,出现一核有难三核围观的情况。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/85/851a1db4af46307c6119eccfc6d3e66e.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"出现如上问题是什么原因呢?通过查看DHT环上各节点的hash值不难看出,各节点间距不均匀,插入的数据按顺时针查找节点时都找到了Node3,因此数据都写到了Node3里面,所以节点区间不均匀会使某些节点能覆盖更多的数据,导致数据不均衡。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/c4/c4bede8de90c2584f5739285f19bb91f.webp","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/03/03f216a1117819d5daad73143cf4f1d0.webp","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"说了一个简单DHT环的基本原理后,再来思考一个问题:简单的DHT环数据离散,但仍然存在数据倾斜的情况,还不如用传统的hash方式分配数据。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"前面提到传统的hash方式在当时在节点故障后,整个集群的数据会进行大量的迁移,影响集群性能,那么DHT能解决这一问题吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们还是用之前分配好的100万条数据,模拟节点4故障,如下图所示,Node4上的数据只迁移到了Node1,对Node2和Node3不产生数据迁移,从而降低了节点故障导致每个节点都需要进行数据迁移带来的影响。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/0b/0bcaec3e05fd632ec2b976e40433db7b.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"五、DHT 的改进"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1、虚拟节点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大家思考一下,解决数据倾斜的问题可以如何解决?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过增加集群节点的方式最简单直接,目的是将更多的节点散列到DHT环上,使得环上所有节点分布更加均匀,节点间的区间间隔尽可能的均衡,以下是10个节点和20个节点集群的数据分布情况。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/69/692415cd97fc1161882b937126cefa57.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/be/bed9442323b674633b38c39d54622886.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以发现,通过增加节点的方式,仍然无法从根本上解决数据倾斜的问题。并且增加节点会提高集群的设备成本和维护成本。同时,这种方案还引出了一个严重的问题,如果Node20故障了,那么Node20的数据会全数迁移到下一个节点上,最终导致集群出现数据倾斜,数据较多的节点还将处理更多的IO请求,容易形成数据热点,成为性能瓶颈,引发集群整体的性能下降。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/68/68b3ed8f0236b87ef605b103630586f6.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"(2) 引入虚拟节点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了解决数据倾斜的问题,引入了虚拟节点的概念。虚拟节点也就是真实节点的一个逻辑副本,如图所示,对节点NodeA进行3次虚拟节点hash分布,形成了虚拟节点NodeA1、NodeA2、NodeA3。当NodeA故障后,指向NodeA的数据会指向NodeB、NodeC。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fd/fda444558ba631d527d6d613f983d9d9.png","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/c1/c1304192cfedd3a47a8ff8d37544420d.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"当引入虚拟节点数量为100时,数据已经分散在各个节点上了,如果虚拟节点足够多,最终将达到数据均衡的状态。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/40/40e24fd48273dab00b9f9f581eb05dbc.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"虚拟节点数据1万时的数据分布:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/2d/2d34be27e60657dea1e9f33158b690a2.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"虚拟节点数量为100万时的数据分布:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/6f/6f054689db840574285e9598fcbffbc1.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"当Node3故障后,Node3上的数据被均匀的分散到其他节点上,不会出现数据倾斜的情况。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fe/fe62db3714091ec90bf48c2374d87393.webp","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2、负载边界因子"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这样就完美了吗?我们初始化一个4个节点的DHT环,虚拟节点设置为100,然后插入100条数据,并打印DHT环的元数据信息如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/62/62be9a58159a4eb74bb3ff5eed5e3b1d.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以发现,虽然设置的虚拟节点,但是仍然无法均衡的将节点散列到DHT环上,导致Node2过载,Node空闲。我们再思考一种极端场景,当我们的数据恰好计算hash值后都在区间A,而这个区间只有NodeA,那么仍然出现了数据倾斜。如何解决这个问题呢,这里我们引入一个叫负载边界因子的概念。DHT环部署了4个节点,总共有100条数据要插入,那么平均下来每个节点的权重就是100/4+1=26,当数据映射过程中达到了该节点的权重,则映射到下一个节点,下面是代码实现。"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/79/79e7abdce50ae0d672bd024c385aa023.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/4d/4d69dcb9cfd51810a91adbd711460848.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"打开负载边界因子开关的情况:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/71/7183bbb150deea76db931b8f5a3ae2fa.webp","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"boxShadow"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在打开负载边界因子开关后,数据得到了较好的均衡。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"六、DHT 引发的思考"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上述的只是一个简单的DHT,数据也做了简化,数据的存储和读取都需要查询DHT环,如何提升DHT的读写性能?如何提升DHT的高可靠?当节点故障后,如何将故障节点的数据迁移到新的节点?如何做好数据备份?如何保证副本数据不集中在一个节点上?也是需要去思考的,本文只是抛砖引玉简单的介绍了DHT基本的思想,更多生产环境面临的挑战,在此不做展开。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以看到,DHT提供了一种负载均衡的思路。利用hash算法的特性,将数据或业务请求分散到集群中的各个节点上,提高系统容错性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"vivo  用户运营开发团队"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章