官宣!達摩院開源祕藏深度語言模型體系AliceMind,NLP正在走向大工業時代

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"6月22日,InfoQ獲悉,阿里巴巴達摩院已正式開源深度語言模型體系AliceMind。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"開源地址:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/github.com\/alibaba\/AliceMind","title":"","type":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"https:\/\/github.com\/alibaba\/AliceMind"}],"marks":[{"type":"italic"},{"type":"strong"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ee\/5f\/ee1b12ef6f5b1f2ecafd5cd3594a955f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"達摩院開源頂級語言AI —AliceMind"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind是什麼?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一句話介紹,AliceMind是業界領先的預訓練語言模型體系。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"字面含義:AliceMind, Alibaba's Collection of Encoder-decoders from MinD (Machine Intelligence of Damo)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"具體來說,預訓練語言模型是當前自然語言處理(NLP)領域的研究熱點之一,“預訓練+精調”已成爲NLP任務的新範式。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"阿里巴巴達摩院作爲最早投入預訓練語言模型研究的團隊之一,歷經三年研發出深度語言模型體系AliceMind, 包括通用語言模型StructBERT、多語言VECO、生成式PALM、多模態StructVBERT、結構化StructuralLM、知識驅動LatticeBERT、機器閱讀理解UED、超大模型PLUG等模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind先後登頂了GLUE、CLUE、XTREME、VQA Challenge、DocVQA、MS MARCO在內的自然語言處理領域的的六大權威榜單,領先業界,相關工作論文被AI\/NLP頂會接收。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"今年6月19日,AliceMind在6月19日再次登頂多模態權威榜單VQA Challenge 2021,這個比賽類似看圖問答,給定一張圖像和關於圖像的自然語言問題,AI需要提供準確的自然語言答案。AliceMind戰勝了微軟、Facebook等幾十家國際頂尖團隊,將紀錄從去年第一名的76.36%顯著提升到79.78%,接近人類水平(80.78%)。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind有何領先之處?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1、覆蓋全面:覆蓋多語言、多模態、結構化等多個預訓練語言模型"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2、技術領先:多個模型在世界榜單中排名靠前"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3、開放普惠:將圍繞Pre-training+Fine-tuning(“預訓練+精調”)語言模型持續進行生態性的技術開源"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind有何創新之處?"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、通用語言模型(StructBERT)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Google於2018年底推出的BERT模型是業界廣泛使用的自然語言預訓練模型,達摩院團隊在BERT的基礎上提出優化模型StructBERT,讓機器更好地掌握人類語法,理解自然語言,2020年多次在自然語言處理領域頂級賽事GLUE Benchmark上奪冠。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructBERT通過在句子級別和詞級別引入兩個新的目標函數,好比給機器內置一個“語法識別器”,使機器在面對語序錯亂或不符合語法習慣的詞句時,仍能準確理解並給出正確的表達和迴應,大大提高機器對詞語、句子以及語言整體的理解力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6c\/4f\/6cfc59f8319c53e0899208a10640444f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、多語言語言模型(VECO)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"跨語言預訓練初衷是爲多種語言建立起一個統一聯合的語義表示,AliceMind體系內的跨語言預訓練模型VECO一經提出,便在國際權威多語言榜單XTREME排名第一,遠超Facebook、Microsoft等業界代表性模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"VECO目前支持100種語言的理解和生成任務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"VECO效果亮眼,主要是因爲兩項創新:一是其可以更加“顯式”得進行跨語言信息的建模(圖1);二是VECO在預訓練的過程充分學習用於語言理解(NLU)和生成(NLG)任務,並讓二者互相學習提高彼此(圖2)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ff\/f3\/ff10a40bf651e19bd11aefc8343396f3.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"圖1"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ed\/b3\/edcf27a266b469d3c3ab9008f644dab3.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"圖2"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,VECO模型成爲了多語言領域內的第一個同時在多語言理解(NLU)和語言生成(NLG)任務上均取得業內最佳效果的模型,也被頂會ACL2021錄用。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、生成式語言模型(PALM)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM採用了與之前的生成模型不同的預訓練方式,將預測後續文本作爲其預訓練目標,而非重構輸入文本。PALM在一個模型中使用自編碼方式來編碼輸入文本,同時使用自迴歸方式來生成後續文本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這種預測後續文本的預訓練促使該模型提高對輸入文本的理解能力,從而在下游的各個語言生成(NLG)任務上取得更好的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM在MARCO NLG自然語言生成公開評測上取得了排行榜第一,同時在摘要生成標準數據集CNN\/DailyMail和Gigaword上也超過了現有的各個預訓練生成語言模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PALM可被用於問答生成、文本複述、回覆生成、文本摘要、Data-to-Text等生成應用上。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6d\/8f\/6dd5af04897f3dbdbd99d27717ef318f.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4、多模態語言模型(StructVBERT)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructVBERT是在通用的StructBERT模型基礎上,同時引入文本和圖像模態,在統一的多模態語義空間進行聯合建模,在單流架構的基礎上同時引入圖像-文本描述數據和圖像問答數據進行多任務預訓練,並在多尺度的圖像特徵上進行分階段預訓練。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,模型利用Transformer encoder-decoder結構提升跨模態雙流建模能力,結合單流、雙流結構的優點進一步提升模型對文本和圖像兩個模態的理解能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/f6\/90\/f63416613e72707d841d13ecaca50990.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5、結構化語言模型(StructuralLM)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"StructuralLM在語言模型StructBERT的基礎上擴展到結構化語言模型,充分利用圖片文檔數據的2D位置信息,並引入box位置預測的預訓練任務,幫助模型感知圖片不同位置之間詞語的關係,這對於理解真實場景中的圖片文檔十分重要。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Structural LM模型在DocVQA榜單上排名第一,同時在表單理解FUNSD數據集和文檔圖片分類RVL-CDIP數據集上也超過現有的所有預訓練模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/e0\/1e\/e08bb137832a88d2dc16a6dcayye7d1e.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"6、機器閱讀理解模型(UED)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"自最開始聲名大噪的SQuAD榜單起,阿里圍繞着機器閱讀理解發展路線:單段落抽取->多文檔抽取\/檢索->多文檔生成->開放式閱讀理解,拿下了一系列的榜單冠軍:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2018年在單段落機器閱讀理解領域頂級賽事SQuAD上首次超出人類回答精準率;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2018年在多文檔機器閱讀理解權威比賽TriviaQA和DuReader上雙雙刷新紀錄,取得第一名;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2019年在信息檢索國際頂級評測TREC 2019 Deep Learning Track上的段落檢索和文檔檢索任務上均取得第一名;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2019年在機器閱讀理解頂級賽事MS MARCO的段落排序、多文檔答案抽取以及多文檔答案生成3個任務均取得第一名,並在多文檔答案抽取任務上首次超越人類水平;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/d9\/25\/d9819a4557fe704fd4b39b9421faf425.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"7、超大規模中文理解和生成統一模型(PLUG)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PLUG是目前中文社區已開放API的最大規模的純文本預訓練語言模型,集語言理解與生成能力於一身。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"PLUG可爲目標任務做針對性優化,通過利用下游訓練數據finetune模型使其在該特定任務上生成質量達到最優,彌補之前其它大規模生成模型few-shot inference的生成效果不足,適於應用在實際生成任務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同時,PLUG採用encoder-decoder的雙向建模方式,因此,在傳統的zero-shot生成的表現上,無論是生成的多樣性,領域的廣泛程度,還是生成長文本的表現,較此前的模型均有明顯的優勢。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6e\/2d\/6e234yy85fdebfd9318fdb4aeb14a62d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"8.知識驅動的語言模型LatticeBERT"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LatticeBERT在預訓練模型中訓練中有效地融合了詞典等知識,從而能夠同時建模字和詞的結構,來線性化地表示這種混合粒度的輸入。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一步是將涵蓋多粒度字詞信息的中文文本用詞格(Lattice)表示起來,再把這個詞格線性化作爲BERT的輸入。LatticeBERT在2020年9月達到中文予以理解評估基準CLUE榜單的base模型中的第一名。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/19\/5e\/19eb5597db5327f0e5ffa88191a4d95e.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind的應用情況"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind具有閱讀、寫作、翻譯、問答、搜索、摘要生成、對話等多種能力,目前已成爲阿里的語言技術底座,日均調用量超過50億次,活躍場景超過200個,已在跨境電商、客服、廣告等數十個核心業務應用落地。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AliceMind已上線到內部平臺,開箱即用,目前支持繼續訓練,精調,蒸餾,測試,部署五大功能,只需簡單操作即可完成語言模型從訓練到部署的完整鏈路。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在阿里之外,AliceMind廣泛運用於醫療、能源、金融等多個行業。其中,浙江電網公司以AliceMind爲底座爲員工構建智能化運維平臺,應用於變壓器檢修、供電搶修等業務,已經開始在國家電網公司統一推廣。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AliceMind開源有什麼意義?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"傳統NLP模型製作複雜,耗時耗力,且用途單一,難以複用,猶如手工作坊。但近幾年興起的預訓練語言模型,正在改變局面,有望讓語言AI走向入可規模化複製的工業時代。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果用鍊鋼來類比,以前要獲得一個可用的NLP應用模型,要從鐵礦石開始鍊鋼,週期長,費用高,產量低;但現在有了開源的預訓練語言模型,相當於有了現成的粗鋼,只需要把粗鋼煉成所需的特定鋼材,效率大爲提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"阿里達摩院深度語言模型團隊負責人黃松芳表示,“"},{"type":"text","marks":[{"type":"strong"}],"text":"預訓練語言模型已成爲NLP領域的基石和原材料,AliceMind開源將降低NLP領域研究和應用創新的門檻,助推行業從手工業時代走向大工業時代。"},{"type":"text","text":"”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"strong"}],"text":"開源地址:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"http:\/\/#license","title":"","type":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"https:\/\/github.com\/alibaba\/AliceMind\/"}],"marks":[{"type":"italic"},{"type":"strong"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章