官宣!达摩院开源秘藏深度语言模型体系AliceMind,NLP正在走向大工业时代

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"6月22日,InfoQ获悉,阿里巴巴达摩院已正式开源深度语言模型体系AliceMind。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"开源地址:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/github.com\/alibaba\/AliceMind","title":"","type":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"https:\/\/github.com\/alibaba\/AliceMind"}],"marks":[{"type":"italic"},{"type":"strong"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ee\/5f\/ee1b12ef6f5b1f2ecafd5cd3594a955f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"达摩院开源顶级语言AI —AliceMind"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind是什么?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一句话介绍,AliceMind是业界领先的预训练语言模型体系。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"字面含义:AliceMind, Alibaba's Collection of Encoder-decoders from MinD (Machine Intelligence of Damo)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"具体来说,预训练语言模型是当前自然语言处理(NLP)领域的研究热点之一,“预训练+精调”已成为NLP任务的新范式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"阿里巴巴达摩院作为最早投入预训练语言模型研究的团队之一,历经三年研发出深度语言模型体系AliceMind, 包括通用语言模型StructBERT、多语言VECO、生成式PALM、多模态StructVBERT、结构化StructuralLM、知识驱动LatticeBERT、机器阅读理解UED、超大模型PLUG等模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind先后登顶了GLUE、CLUE、XTREME、VQA Challenge、DocVQA、MS MARCO在内的自然语言处理领域的的六大权威榜单,领先业界,相关工作论文被AI\/NLP顶会接收。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"今年6月19日,AliceMind在6月19日再次登顶多模态权威榜单VQA Challenge 2021,这个比赛类似看图问答,给定一张图像和关于图像的自然语言问题,AI需要提供准确的自然语言答案。AliceMind战胜了微软、Facebook等几十家国际顶尖团队,将纪录从去年第一名的76.36%显著提升到79.78%,接近人类水平(80.78%)。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind有何领先之处?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1、覆盖全面:覆盖多语言、多模态、结构化等多个预训练语言模型"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2、技术领先:多个模型在世界榜单中排名靠前"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3、开放普惠:将围绕Pre-training+Fine-tuning(“预训练+精调”)语言模型持续进行生态性的技术开源"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind有何创新之处?"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、通用语言模型(StructBERT)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Google于2018年底推出的BERT模型是业界广泛使用的自然语言预训练模型,达摩院团队在BERT的基础上提出优化模型StructBERT,让机器更好地掌握人类语法,理解自然语言,2020年多次在自然语言处理领域顶级赛事GLUE Benchmark上夺冠。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructBERT通过在句子级别和词级别引入两个新的目标函数,好比给机器内置一个“语法识别器”,使机器在面对语序错乱或不符合语法习惯的词句时,仍能准确理解并给出正确的表达和回应,大大提高机器对词语、句子以及语言整体的理解力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6c\/4f\/6cfc59f8319c53e0899208a10640444f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、多语言语言模型(VECO)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"跨语言预训练初衷是为多种语言建立起一个统一联合的语义表示,AliceMind体系内的跨语言预训练模型VECO一经提出,便在国际权威多语言榜单XTREME排名第一,远超Facebook、Microsoft等业界代表性模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"VECO目前支持100种语言的理解和生成任务。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"VECO效果亮眼,主要是因为两项创新:一是其可以更加“显式”得进行跨语言信息的建模(图1);二是VECO在预训练的过程充分学习用于语言理解(NLU)和生成(NLG)任务,并让二者互相学习提高彼此(图2)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ff\/f3\/ff10a40bf651e19bd11aefc8343396f3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图1"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ed\/b3\/edcf27a266b469d3c3ab9008f644dab3.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图2"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,VECO模型成为了多语言领域内的第一个同时在多语言理解(NLU)和语言生成(NLG)任务上均取得业内最佳效果的模型,也被顶会ACL2021录用。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、生成式语言模型(PALM)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM采用了与之前的生成模型不同的预训练方式,将预测后续文本作为其预训练目标,而非重构输入文本。PALM在一个模型中使用自编码方式来编码输入文本,同时使用自回归方式来生成后续文本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这种预测后续文本的预训练促使该模型提高对输入文本的理解能力,从而在下游的各个语言生成(NLG)任务上取得更好的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM在MARCO NLG自然语言生成公开评测上取得了排行榜第一,同时在摘要生成标准数据集CNN\/DailyMail和Gigaword上也超过了现有的各个预训练生成语言模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM可被用于问答生成、文本复述、回复生成、文本摘要、Data-to-Text等生成应用上。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6d\/8f\/6dd5af04897f3dbdbd99d27717ef318f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4、多模态语言模型(StructVBERT)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructVBERT是在通用的StructBERT模型基础上,同时引入文本和图像模态,在统一的多模态语义空间进行联合建模,在单流架构的基础上同时引入图像-文本描述数据和图像问答数据进行多任务预训练,并在多尺度的图像特征上进行分阶段预训练。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,模型利用Transformer encoder-decoder结构提升跨模态双流建模能力,结合单流、双流结构的优点进一步提升模型对文本和图像两个模态的理解能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/f6\/90\/f63416613e72707d841d13ecaca50990.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5、结构化语言模型(StructuralLM)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructuralLM在语言模型StructBERT的基础上扩展到结构化语言模型,充分利用图片文档数据的2D位置信息,并引入box位置预测的预训练任务,帮助模型感知图片不同位置之间词语的关系,这对于理解真实场景中的图片文档十分重要。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Structural LM模型在DocVQA榜单上排名第一,同时在表单理解FUNSD数据集和文档图片分类RVL-CDIP数据集上也超过现有的所有预训练模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/e0\/1e\/e08bb137832a88d2dc16a6dcayye7d1e.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"6、机器阅读理解模型(UED)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"自最开始声名大噪的SQuAD榜单起,阿里围绕着机器阅读理解发展路线:单段落抽取->多文档抽取\/检索->多文档生成->开放式阅读理解,拿下了一系列的榜单冠军:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2018年在单段落机器阅读理解领域顶级赛事SQuAD上首次超出人类回答精准率;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2018年在多文档机器阅读理解权威比赛TriviaQA和DuReader上双双刷新纪录,取得第一名;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2019年在信息检索国际顶级评测TREC 2019 Deep Learning Track上的段落检索和文档检索任务上均取得第一名;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2019年在机器阅读理解顶级赛事MS MARCO的段落排序、多文档答案抽取以及多文档答案生成3个任务均取得第一名,并在多文档答案抽取任务上首次超越人类水平;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/d9\/25\/d9819a4557fe704fd4b39b9421faf425.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"7、超大规模中文理解和生成统一模型(PLUG)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PLUG是目前中文社区已开放API的最大规模的纯文本预训练语言模型,集语言理解与生成能力于一身。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PLUG可为目标任务做针对性优化,通过利用下游训练数据finetune模型使其在该特定任务上生成质量达到最优,弥补之前其它大规模生成模型few-shot inference的生成效果不足,适于应用在实际生成任务。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同时,PLUG采用encoder-decoder的双向建模方式,因此,在传统的zero-shot生成的表现上,无论是生成的多样性,领域的广泛程度,还是生成长文本的表现,较此前的模型均有明显的优势。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6e\/2d\/6e234yy85fdebfd9318fdb4aeb14a62d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"8.知识驱动的语言模型LatticeBERT"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LatticeBERT在预训练模型中训练中有效地融合了词典等知识,从而能够同时建模字和词的结构,来线性化地表示这种混合粒度的输入。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一步是将涵盖多粒度字词信息的中文文本用词格(Lattice)表示起来,再把这个词格线性化作为BERT的输入。LatticeBERT在2020年9月达到中文予以理解评估基准CLUE榜单的base模型中的第一名。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/19\/5e\/19eb5597db5327f0e5ffa88191a4d95e.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind的应用情况"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind具有阅读、写作、翻译、问答、搜索、摘要生成、对话等多种能力,目前已成为阿里的语言技术底座,日均调用量超过50亿次,活跃场景超过200个,已在跨境电商、客服、广告等数十个核心业务应用落地。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind已上线到内部平台,开箱即用,目前支持继续训练,精调,蒸馏,测试,部署五大功能,只需简单操作即可完成语言模型从训练到部署的完整链路。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在阿里之外,AliceMind广泛运用于医疗、能源、金融等多个行业。其中,浙江电网公司以AliceMind为底座为员工构建智能化运维平台,应用于变压器检修、供电抢修等业务,已经开始在国家电网公司统一推广。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind开源有什么意义?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"传统NLP模型制作复杂,耗时耗力,且用途单一,难以复用,犹如手工作坊。但近几年兴起的预训练语言模型,正在改变局面,有望让语言AI走向入可规模化复制的工业时代。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果用炼钢来类比,以前要获得一个可用的NLP应用模型,要从铁矿石开始炼钢,周期长,费用高,产量低;但现在有了开源的预训练语言模型,相当于有了现成的粗钢,只需要把粗钢炼成所需的特定钢材,效率大为提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"阿里达摩院深度语言模型团队负责人黄松芳表示,“"},{"type":"text","marks":[{"type":"strong"}],"text":"预训练语言模型已成为NLP领域的基石和原材料,AliceMind开源将降低NLP领域研究和应用创新的门槛,助推行业从手工业时代走向大工业时代。"},{"type":"text","text":"”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"开源地址:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"http:\/\/#license","title":"","type":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"https:\/\/github.com\/alibaba\/AliceMind\/"}],"marks":[{"type":"italic"},{"type":"strong"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章