屠榜各大CV任務,微軟開源的Swin Transformer有多強?

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近日,微軟 Swin Transformer 代碼正式開源,短短兩天就在 GitHub 上獲得了 1.9k 的 Star,相關話題在知乎上同樣引起了廣泛的討論和關注。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"微軟 Swin Transformer 正式開源"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 可以理解爲一個通用的視覺骨幹網絡,Swin Transformer 設計出了一種分層表示形式,首先由小的 PATCHES 開始,而後逐漸將相鄰的各 Patches 融合至一個更深的 Transformer 層當中。通過這種分層結構,Swin Transformer 模型能夠使用 FPN 及 UNET 等高密度預測器實現技術升級,通過限制非重疊窗口中的部分計算量降低計算強度。各窗口中的 PATCHES 數量是固定的,因此隨着圖像尺寸的增加,複雜性也將不斷增長。相比於 ViT,Swin Transfomer 計算複雜度大幅度降低,隨着輸入圖像大小線性計算複雜度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 的核心設計是將 Self-Attention 層劃分爲 SHIFT,SHIFTED Window 接入上一層窗口,二者之間保持連接,由此顯著增強建模能力。這種策略還有助於有效降低延遲:所有 Query Patches 窗口共享同一 KEY 集,由此節約硬件內存;以往的 Transformer 方法由於使用不同的 Query 像素,因此在實際硬件中往往具有較高延遲;論文實驗證明,SHIFTED Window 方法帶來的延遲較傳統滑動方法更低,且二者的建模能力基本相同。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於此,Swin Transformer 在各類迴歸任務、圖像分類、目標檢測、語義分割等方面具有極強性能,其性能優於 VIT\/DEIT 與 RESNE (X) T。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/34\/348f9c29d0267d43e478994067f910c4.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/microsoft\/Swin-Transformer","title":"","type":null},"content":[{"type":"text","text":"https:\/\/github.com\/microsoft\/Swin-Transformer"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"原作者團隊曹越在知乎上的回答:"},{"type":"link","attrs":{"href":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"論文地址:"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/pdf\/2103.14030.pdf","title":"","type":null},"content":[{"type":"text","text":"https:\/\/arxiv.org\/pdf\/2103.14030.pdf"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"實現方法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 的架構如下圖所示。首先,它與 VIT 一樣將輸入的圖片劃分爲多個 Patches,作者使用 4 x 4 的 patch 大小;之後是嵌入層,最後則是作者設計的 Swin Transformer BLOCK。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/75\/75f0733e7557aa3fce1168a9e57e1ea4.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer Block:此模塊是 Transformer 中使用 Shifted Window 的多 ATTENTION 模塊,具有一致性;Swin Transformer 模塊中包含一個 MSA(多頭 Attention)模塊 SHIFTED WINDOW,之後是 2 層 MLP,接着將 Layernorm 層添加至各 MSA 模塊與各 MLP 層內,而後是剩餘連接。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於 Self-Attension 的 SHIFTED Window"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/55\/553aa65c2f080109bf38e830ab0e4cf5.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"標準 Transformer 使用全局 self-attention 以創建各令牌之間的關係,但這會令圖像大小增加 2 倍、導致複雜性隨之提升,因此不適用於處理高強度任務。爲了提升建模效率,作者提出由“部分窗口計算 Self-Attention”,即假設各個窗口均包含 MXM Patches、全局 MSA 模塊與基於窗口的 MSA 模塊,並在 HXW Patches 的圖像之上進行復雜度計算:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/29\/29575dcbc64a63979dd107d1aa4554ad.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(1)的複雜度爲 PATCH 數量 HW 的次生增長;(2)則爲線性增長,其中 m 爲固定值,但全局 self-attention 爲大 HW,因此基於窗口的 Self-Attention 計算量不會很高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"將 SHIFTED Window 拆分爲連續塊"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於 Windows 的 Self-Attention 雖然擁有線性增長的複雜度,但其缺少在各窗口之間的連接,因此限制了模型的建模能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲了在容器上引入連接以實現模型維護,作者提出了 SHIFTED Window 分割的概念。第一個模塊使用中性窗口分割,並根據窗口大小的 M=4 放大得出一個 8 x 8 的特徵圖。我們將此窗口劃分爲 2 x 2 的形式,而後通過 Shifted Window 設置下一模塊,將其移動 M\/2 個像素。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"使用 SHIFTED 結構,Swin Transformer Blocks 的計算如下所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/fb\/fb6ea395cbba9d600a64d6b31ecc95f2.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"W-MSA 與 SW-MSA 分別表示使用整齊分割與 SHIFTED SELF-ATTENTION。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Shifted Window 可以跨越多個窗口向模型添加連接,並保持模型的良好運行效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SHIFTED WINDOW 高效批量計算"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/32\/32e245ec9ce080c89a27c18bef86dd7a.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SHIFTED WINDOW 也有自己的問題,即導致窗口數量增加——由原本的 (h \/ m xw \/ m) 個增加到 ((h \/ m + 1) x (W \/ m + 1)) 個,而且其中某些窗口要小於 MXM。這種方法會將較小的窗口填充爲 MXM,並計算 attention 值以掩蔽填充操作;當窗口數量較少、但大於 2 x 2 時,由此帶來的額外計算量將相當可觀(由 2 x 2 到 3 x 3,計算量增加達 2.25 倍)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,作者提出了一種更爲高效的 BATCH 計算方法,即沿左上角執行 Cyclic Shift。在完成這項位移之後,Batch WINDOW 將擁有一張由非相鄰子窗口組成的特徵圖,這相當於是使用 Cyclic-Shift、Batch-Windows 與整齊窗口分割限制子窗口內的 Self-Attention 數量,從而極大提高了計算效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"相對位置偏移:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作者介紹了 Self-Attention 中各頭的相對位置。偏移 B:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/fe\/fe3da198a5fd4820d926d126656de143.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/22\/225a476662938ec91defada2c6283eab.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/d3\/d30cb491ff0bc3b778633712bb15a550.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這將顯著提高性能表現。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作者根據大小與計算複雜度建立起一套基本模型,名爲 SWIN-B;同時引入了 SWIN-T、SWIN-S 與 SWIN-L,其模型大小與複雜度分別爲 0.25 倍、0.5 倍與 2 倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當我們將窗口大小 M 設定爲 7 時,各頭部查詢 D 爲 32,各擴展層的 mlp α爲 4。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/c5\/c5aa7022d68ae43637249271dbaef3be.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中 C 爲初始階段隱藏層的通道數量。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"具體表現"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖像分類"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 1A,從零開始在 ImgeNet-1K 上進行訓練:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/23\/236e9c565ec10b0916ac24d5a0b04519.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 1B,在 ImageNet-22k 上進行首輪訓練,之後遷移至 ImageNet-1K:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/49\/49f415e98675eb915c49fb140f8af674.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目標檢測"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 2A,在不同模型幀上使用 Swin Transformer 替代 BackBone:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/12\/127e963cf10e0e25bbd2194956927357.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 2C,與 SOTA 的比較結果:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/75\/7566a717839fd950ae6d7cf6061fd4d5.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"語義分割"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 3,對於較小模型,可提供比先前 SOTA SETR 更高的 MIOU:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/27\/2753294acd8e55801a40b7ea52ee56c1.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"消融實驗"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 4,Shifted Window 在相對位置偏移中的性能改進:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/93\/939cfce2018167501ca908b13ea2135b.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 5,SHIFTED WINDOW 與 CYCLIC 高效執行:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/ba\/ba4de945c1c280892552b4426555361e.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 6,不同 Self-Attentions 的比較:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/73\/73fdceee5450af60d14e80626f2e0f74.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Transformer 在 CV 上的應用前景有可能替代 CNN 嗎?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ViT 的出現擴大了 Transformer 的使用範圍,這也讓 AI 領域的從業者開始關注 Transformer 與 CNN 的關係。在知乎上,一位用戶提問稱:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前已經有基於 Transformer 在三大圖像問題上的應用:分類(ViT),檢測(DETR)和分割(SETR),並且都取得了不錯的效果。那麼未來,Transformer 有可能替換 CNN 嗎,Transformer 會不會如同在 NLP 領域的應用一樣革新 CV 領域?後面的研究思路可能會有哪些呢?"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"關於這個問題,很多知乎答主已經給出了自己的答案,包括浙江大學控制科學與工程博士、復旦大學微電子學院碩士、阿里巴巴高級算法專家等,大體總結爲未來是否會形成完全替代尚不好預測, 但對 CNN 的降維打擊已經形成。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"感興趣的用戶可以到知乎閱讀各位答友的完整回答:"},{"type":"link","attrs":{"href":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"參考鏈接:"},{"type":"link","attrs":{"href":"https:\/\/www.programmersought.com\/article\/61847904617\/","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.programmersought.com\/article\/61847904617\/"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章