屠榜各大CV任务,微软开源的Swin Transformer有多强?

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近日,微软 Swin Transformer 代码正式开源,短短两天就在 GitHub 上获得了 1.9k 的 Star,相关话题在知乎上同样引起了广泛的讨论和关注。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"微软 Swin Transformer 正式开源"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 可以理解为一个通用的视觉骨干网络,Swin Transformer 设计出了一种分层表示形式,首先由小的 PATCHES 开始,而后逐渐将相邻的各 Patches 融合至一个更深的 Transformer 层当中。通过这种分层结构,Swin Transformer 模型能够使用 FPN 及 UNET 等高密度预测器实现技术升级,通过限制非重叠窗口中的部分计算量降低计算强度。各窗口中的 PATCHES 数量是固定的,因此随着图像尺寸的增加,复杂性也将不断增长。相比于 ViT,Swin Transfomer 计算复杂度大幅度降低,随着输入图像大小线性计算复杂度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 的核心设计是将 Self-Attention 层划分为 SHIFT,SHIFTED Window 接入上一层窗口,二者之间保持连接,由此显著增强建模能力。这种策略还有助于有效降低延迟:所有 Query Patches 窗口共享同一 KEY 集,由此节约硬件内存;以往的 Transformer 方法由于使用不同的 Query 像素,因此在实际硬件中往往具有较高延迟;论文实验证明,SHIFTED Window 方法带来的延迟较传统滑动方法更低,且二者的建模能力基本相同。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于此,Swin Transformer 在各类回归任务、图像分类、目标检测、语义分割等方面具有极强性能,其性能优于 VIT\/DEIT 与 RESNE (X) T。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/34\/348f9c29d0267d43e478994067f910c4.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"GitHub 地址:"},{"type":"link","attrs":{"href":"https:\/\/github.com\/microsoft\/Swin-Transformer","title":"","type":null},"content":[{"type":"text","text":"https:\/\/github.com\/microsoft\/Swin-Transformer"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"原作者团队曹越在知乎上的回答:"},{"type":"link","attrs":{"href":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"论文地址:"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/pdf\/2103.14030.pdf","title":"","type":null},"content":[{"type":"text","text":"https:\/\/arxiv.org\/pdf\/2103.14030.pdf"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"实现方法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer 的架构如下图所示。首先,它与 VIT 一样将输入的图片划分为多个 Patches,作者使用 4 x 4 的 patch 大小;之后是嵌入层,最后则是作者设计的 Swin Transformer BLOCK。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/75\/75f0733e7557aa3fce1168a9e57e1ea4.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Swin Transformer Block:此模块是 Transformer 中使用 Shifted Window 的多 ATTENTION 模块,具有一致性;Swin Transformer 模块中包含一个 MSA(多头 Attention)模块 SHIFTED WINDOW,之后是 2 层 MLP,接着将 Layernorm 层添加至各 MSA 模块与各 MLP 层内,而后是剩余连接。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于 Self-Attension 的 SHIFTED Window"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/55\/553aa65c2f080109bf38e830ab0e4cf5.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"标准 Transformer 使用全局 self-attention 以创建各令牌之间的关系,但这会令图像大小增加 2 倍、导致复杂性随之提升,因此不适用于处理高强度任务。为了提升建模效率,作者提出由“部分窗口计算 Self-Attention”,即假设各个窗口均包含 MXM Patches、全局 MSA 模块与基于窗口的 MSA 模块,并在 HXW Patches 的图像之上进行复杂度计算:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/29\/29575dcbc64a63979dd107d1aa4554ad.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(1)的复杂度为 PATCH 数量 HW 的次生增长;(2)则为线性增长,其中 m 为固定值,但全局 self-attention 为大 HW,因此基于窗口的 Self-Attention 计算量不会很高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"将 SHIFTED Window 拆分为连续块"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于 Windows 的 Self-Attention 虽然拥有线性增长的复杂度,但其缺少在各窗口之间的连接,因此限制了模型的建模能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了在容器上引入连接以实现模型维护,作者提出了 SHIFTED Window 分割的概念。第一个模块使用中性窗口分割,并根据窗口大小的 M=4 放大得出一个 8 x 8 的特征图。我们将此窗口划分为 2 x 2 的形式,而后通过 Shifted Window 设置下一模块,将其移动 M\/2 个像素。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"使用 SHIFTED 结构,Swin Transformer Blocks 的计算如下所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/fb\/fb6ea395cbba9d600a64d6b31ecc95f2.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"W-MSA 与 SW-MSA 分别表示使用整齐分割与 SHIFTED SELF-ATTENTION。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Shifted Window 可以跨越多个窗口向模型添加连接,并保持模型的良好运行效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SHIFTED WINDOW 高效批量计算"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/32\/32e245ec9ce080c89a27c18bef86dd7a.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"SHIFTED WINDOW 也有自己的问题,即导致窗口数量增加——由原本的 (h \/ m xw \/ m) 个增加到 ((h \/ m + 1) x (W \/ m + 1)) 个,而且其中某些窗口要小于 MXM。这种方法会将较小的窗口填充为 MXM,并计算 attention 值以掩蔽填充操作;当窗口数量较少、但大于 2 x 2 时,由此带来的额外计算量将相当可观(由 2 x 2 到 3 x 3,计算量增加达 2.25 倍)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,作者提出了一种更为高效的 BATCH 计算方法,即沿左上角执行 Cyclic Shift。在完成这项位移之后,Batch WINDOW 将拥有一张由非相邻子窗口组成的特征图,这相当于是使用 Cyclic-Shift、Batch-Windows 与整齐窗口分割限制子窗口内的 Self-Attention 数量,从而极大提高了计算效率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"相对位置偏移:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作者介绍了 Self-Attention 中各头的相对位置。偏移 B:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/fe\/fe3da198a5fd4820d926d126656de143.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/22\/225a476662938ec91defada2c6283eab.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/d3\/d30cb491ff0bc3b778633712bb15a550.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这将显著提高性能表现。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作者根据大小与计算复杂度建立起一套基本模型,名为 SWIN-B;同时引入了 SWIN-T、SWIN-S 与 SWIN-L,其模型大小与复杂度分别为 0.25 倍、0.5 倍与 2 倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"当我们将窗口大小 M 设定为 7 时,各头部查询 D 为 32,各扩展层的 mlp α为 4。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/c5\/c5aa7022d68ae43637249271dbaef3be.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中 C 为初始阶段隐藏层的通道数量。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"具体表现"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"图像分类"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 1A,从零开始在 ImgeNet-1K 上进行训练:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/23\/236e9c565ec10b0916ac24d5a0b04519.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 1B,在 ImageNet-22k 上进行首轮训练,之后迁移至 ImageNet-1K:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/49\/49f415e98675eb915c49fb140f8af674.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目标检测"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 2A,在不同模型帧上使用 Swin Transformer 替代 BackBone:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/12\/127e963cf10e0e25bbd2194956927357.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 2C,与 SOTA 的比较结果:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/75\/7566a717839fd950ae6d7cf6061fd4d5.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"语义分割"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 3,对于较小模型,可提供比先前 SOTA SETR 更高的 MIOU:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/27\/2753294acd8e55801a40b7ea52ee56c1.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"消融实验"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 4,Shifted Window 在相对位置偏移中的性能改进:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/93\/939cfce2018167501ca908b13ea2135b.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 5,SHIFTED WINDOW 与 CYCLIC 高效执行:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/ba\/ba4de945c1c280892552b4426555361e.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"表 6,不同 Self-Attentions 的比较:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/73\/73fdceee5450af60d14e80626f2e0f74.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Transformer 在 CV 上的应用前景有可能替代 CNN 吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ViT 的出现扩大了 Transformer 的使用范围,这也让 AI 领域的从业者开始关注 Transformer 与 CNN 的关系。在知乎上,一位用户提问称:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前已经有基于 Transformer 在三大图像问题上的应用:分类(ViT),检测(DETR)和分割(SETR),并且都取得了不错的效果。那么未来,Transformer 有可能替换 CNN 吗,Transformer 会不会如同在 NLP 领域的应用一样革新 CV 领域?后面的研究思路可能会有哪些呢?"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"关于这个问题,很多知乎答主已经给出了自己的答案,包括浙江大学控制科学与工程博士、复旦大学微电子学院硕士、阿里巴巴高级算法专家等,大体总结为未来是否会形成完全替代尚不好预测, 但对 CNN 的降维打击已经形成。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"感兴趣的用户可以到知乎阅读各位答友的完整回答:"},{"type":"link","attrs":{"href":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.zhihu.com\/question\/437495132\/answer\/1800881612"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"参考链接:"},{"type":"link","attrs":{"href":"https:\/\/www.programmersought.com\/article\/61847904617\/","title":"","type":null},"content":[{"type":"text","text":"https:\/\/www.programmersought.com\/article\/61847904617\/"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章