2 hours to retouch a single video frame by hand? AI does it in 5.3 milliseconds

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"進入全民短視頻時代,人像視頻的拍攝也正在邁向專業化。隨着固化審美的瓦解,十級磨皮的網紅濾鏡被打破,多元化的高級質感成爲新的風向標,“美”到每一幀是人們對動態視頻提出的更高要求。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前,大部分手機均可記錄主流的24fps、25fps、30fps、50fps和60fps(frame per second,FPS),以常見的30FPS爲例,1分鐘的視頻就需要處理1800幀左右,如何保證處理過程中幀與幀之間的效果連續性是算法面臨的關鍵突破點。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"事實上,傳統磨皮算法是一般實時美顏算法設計的優先選項,其本質是由各類高通濾波算法和圖像處理算法組合而成,通過濾波核的大小來實現人像的瑕疵祛除和膚質光滑,經過優化後也能夠達到移動端的實時性能要求,但經傳統磨皮算法處理後導致的五官與皮膚紋理細節缺失容易形成明顯的“假臉”效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/a6\/40\/a6b9e28d78226a596e212808cce30940.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"圖1:傳統磨皮算法VS美圖美顏算法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/cf\/b3\/cfc95c9435b5b1faa990b977103905b3.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"圖2:原圖VS美圖美顏算法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"圍繞用戶更具個性化的“變美”需求,美圖影像研究院(MT 
[Figure 1: Traditional skin-smoothing algorithm vs. Meitu beautification algorithm]

[Figure 2: Original image vs. Meitu beautification algorithm]

To serve users' increasingly personalized beautification needs, Meitu Imaging Research Institute (MT Lab) developed its own deep-learning-based real-time video beautification solution. By designing a lightweight generative neural network and combining it with a powerful model-optimization and inference framework (Manis) and the training advantage of a portrait library containing tens of millions of images, the solution repairs blemishes and removes dullness on faces in live video while preserving real skin-texture detail to the greatest extent possible.

[Animation: Original video vs. Meitu's real-time beautification algorithm]

By comparison, Meitu's real-time beautification algorithm neither flattens facial structure nor overlooks fine flaws: subtle blemishes are handled precisely, and the skin takes on a clean, translucent, clear and natural look.

[Animation: Traditional skin smoothing vs. Meitu's real-time beautification algorithm]

In addition, for a better overall experience, the lightweight network can be deployed widely across low-, mid- and high-end mobile devices while still meeting real-time performance requirements, processing an average of 142 video frames per second and bringing a better beautification experience to more users.

Lightweight model design for better generation quality

Lightweight architecture design strategy

The first consideration in network design is how to balance quality and speed. Under the constraint of not sacrificing too much quality, the model follows design principles that favor high parallelism. The specific strategies behind the lightweight architecture (Figure 3) are as follows:

1. No convolution kernels larger than 3x3 are used; downsampling is likewise done with stride-2 3x3 convolutions, because 3x3 convolutions run far faster than larger kernels.
2. The maximum number of channels in the model is capped at 64 to reduce the computation on large feature maps.
3. The network input size is shrunk as much as possible without hurting quality. The input width is also reduced somewhat rather than using a 1:1 aspect ratio, because the regions on either side of the portrait are background irrelevant to beautification and should not add extra computation.
4. Upsampling uses nearest-neighbor interpolation followed by a 3x3 convolution instead of deconvolution or bilinear interpolation, which makes acceleration easier.
5. A simple single-path architecture is used wherever possible, adding a concat branch only after stride-2 convolutions, because although Add and Concat operations involve little computation, their memory access cost (MAC) is high; the network also avoids ResBlocks to save memory.

[Figure 3: Meitu's lightweight real-time beautification model architecture]
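To make these rules concrete, here is a minimal PyTorch sketch of an encoder-decoder that follows them; the `FastBeautyNet` name, layer counts and channel widths are illustrative assumptions, not MT Lab's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv3x3(in_ch, out_ch, stride=1):
    # Rule 1: only 3x3 kernels; stride-2 3x3 convs replace pooling for downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )


class UpBlock(nn.Module):
    """Rule 4: nearest-neighbor upsample + 3x3 conv instead of deconvolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = conv3x3(in_ch, out_ch)

    def forward(self, x):
        return self.conv(F.interpolate(x, scale_factor=2, mode="nearest"))


class FastBeautyNet(nn.Module):
    """Hypothetical single-path encoder-decoder; channels capped at 64 (rule 2)."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv3x3(3, 16, stride=2)   # concat branches appear only after stride-2 convs (rule 5)
        self.enc2 = conv3x3(16, 32, stride=2)
        self.enc3 = conv3x3(32, 64, stride=2)
        self.dec3 = UpBlock(64, 32)
        self.dec2 = UpBlock(32 + 32, 16)       # skip connections via concat, no ResBlocks
        self.dec1 = UpBlock(16 + 16, 16)
        self.out = nn.Conv2d(16, 3, kernel_size=3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))
        d1 = self.dec1(torch.cat([d2, e1], dim=1))
        return self.out(d1)


# Rule 3: a non-square input crops irrelevant background on both sides of the face.
y = FastBeautyNet()(torch.randn(1, 3, 256, 192))
```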
Improving the model's generation quality

To obtain better real-time generation quality, MT Lab borrowed the re-parameterization (equivalent transformation) idea from RepVGG to further optimize how the lightweight model's components are reassembled (Figure 4).

During training, a parallel 1x1 convolution branch and an identity branch are added alongside every 3x3 convolution. At inference time, the 1x1 convolution branch and the identity branch are each converted into an equivalent special 3x3 convolution via padding, and, using the linearity (additivity) of convolution, their parameters are merged into the main branch's 3x3 convolution.

In effect, this only increases the network's cost during training in order to improve generation quality; when the model is actually deployed, the added branch parameters are merged away through this equivalent transformation and add no extra computation at all.

[Figure 4: Re-parameterization of model components]
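A small numerical sketch of this merge, assuming plain convolutions without the BatchNorm folding that RepVGG also performs; the channel count and variable names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ch = 64  # input and output channels assumed equal so the identity branch exists

conv3 = nn.Conv2d(ch, ch, 3, padding=1)   # main 3x3 branch
conv1 = nn.Conv2d(ch, ch, 1)              # parallel 1x1 branch (training only)

# 1x1 branch -> equivalent 3x3 kernel by zero-padding the kernel to 3x3.
w1_as_3x3 = F.pad(conv1.weight.data, [1, 1, 1, 1])

# Identity branch -> 3x3 kernel with a 1 at the center of each channel's own filter.
w_id = torch.zeros(ch, ch, 3, 3)
w_id[torch.arange(ch), torch.arange(ch), 1, 1] = 1.0

# Linearity of convolution: sum the kernels (and biases) into a single 3x3 conv.
fused = nn.Conv2d(ch, ch, 3, padding=1)
fused.weight.data = conv3.weight.data + w1_as_3x3 + w_id
fused.bias.data = conv3.bias.data + conv1.bias.data

x = torch.randn(1, ch, 32, 32)
train_time = conv3(x) + conv1(x) + x   # three-branch block used during training
infer_time = fused(x)                  # single 3x3 conv used at deployment
print(torch.allclose(train_time, infer_time, atol=1e-4))  # True
```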
To substantially improve training efficiency, in addition to the usual perceptual reconstruction loss and pixel-level loss, MT Lab also borrowed the idea of generative adversarial networks and designed a discriminative loss to supervise the network, correcting the real-time beautification network during the fine-tuning stage and further improving its output quality.

In the discriminative-loss pipeline (Figure 5), blemish regions such as spots, acne and dull patches are first annotated in the training data to produce blemish masks. A large, deep network with many parameters is then trained to be an accurate blemish-mask segmentation model, which serves as the discriminator for the real-time beautification model.

When training the real-time beautification network, the discriminator's parameters are frozen, the beautification network's output is fed into the discriminator, and an all-zero mask is used as the supervision target, so the discriminator pushes the beautification network not to produce any regions containing blemishes, thereby improving the beautification result.

[Figure 5: Design of the discriminative loss]
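A hedged sketch of how such a term could be wired up, assuming `beauty_net` is the lightweight generator and `blemish_seg` is the frozen, pre-trained blemish-segmentation discriminator; the names, the choice of loss and the weighting are assumptions, not MT Lab's code.

```python
import torch
import torch.nn as nn

def blemish_discriminative_loss(beauty_net, blemish_seg, frames):
    """Penalize any blemish the frozen segmentation net still finds in the output."""
    for p in blemish_seg.parameters():      # discriminator weights stay fixed
        p.requires_grad_(False)

    output = beauty_net(frames)             # beautified frames
    pred_mask = blemish_seg(output)         # predicted blemish mask (raw logits)
    target = torch.zeros_like(pred_mask)    # supervision target: an all-zero mask
    return nn.functional.binary_cross_entropy_with_logits(pred_mask, target)

# During fine-tuning this term would be added to the usual losses, e.g.:
# loss = pixel_loss + perceptual_loss + lambda_d * blemish_discriminative_loss(...)
```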
Optimizing the real-time experience

The factors that affect whether a model runs in real time are well known: frame rate, resolution and power consumption. Real-time portrait beautification has to maintain high resolution, which enlarges the model's feature maps; combined with the beautification model's heavy internal computation, this drives the frame rate down and the per-frame latency up. At the same time, heavy image pre- and post-processing adds to the overall latency and device power draw, making it hard to sustain stable real-time processing over long sessions.

Building on its in-house cross-platform AI inference framework Manis, MT Lab combines techniques such as intelligent model distribution, texture-based inference acceleration and optimized effect compositing to bring the beautification model smoothly into Meitu's mobile apps and give users the best possible real-time experience.

Models customized to device compute capability

To ensure the best experience on mobile devices of every tier, MT Lab uses Manis's Tianshu platform to deliver beautification models and AI configurations customized to each device model's capabilities, and the Manis inference framework then schedules inference onto the most suitable compute unit. Low-end devices can thus still reach real-time performance, while high-end devices achieve even better image quality.

The model-distribution process is built around achieving optimal performance on each device: even before model design, MT Lab works closely with AI chip vendors including Huawei, MTK, Qualcomm and Apple, so that the trained model's structure and parameters fully match the computational characteristics of their AI chips.

For GPU inference, Manis chooses GL-texture-based inference on Qualcomm GPUs, whose architecture offers superior memory access on texture memory; on MTK devices, which provide various acceleration features on ordinary memory, it chooses GL-buffer-based inference; and on Qualcomm GPUs that support the sharing extensions of the OpenCL specification, it links the OpenCL and OpenGL contexts, maps GL textures to CL textures and GL buffers to CL buffers, and executes in a mixed OpenGL/OpenCL mode that exploits the respective strengths of rendering and compute, achieving optimal scheduling of the AI algorithm on the GPU.

[Figure 6: Model distribution in Meitu's Tianshu solution]

Optimizing the real-time beautification model

In practice, synchronizing data between the CPU and GPU is a very costly operation, and the extra power consumption makes dropped frames likely during long processing sessions.

To address this, face detection runs as an extremely fast, lightweight CPU inference that quickly locates the face region; operating only on this local region shrinks the feature maps while keeping the key feature information, avoiding the performance drops caused by limited GPU bandwidth under large data volumes. The image-processing stage then runs concurrent inference on GPU data streams, softening the impact of the heavy computation. Finally, effect compositing is optimized over the local region using the two data streams, preserving the full resolution of every video frame and delivering high-quality real-time output.

[Figure 7: Texture data-stream acceleration strategy]
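As a rough illustration of this split (not MT Lab's implementation), the sketch below detects the face on the CPU, runs the heavy model only on the cropped region, and composites the result back into the full-resolution frame; `detect_face` and `beauty_net` are assumed helpers, and resizing the crop to the model's expected input size is omitted.

```python
import torch

def beautify_frame(frame, detect_face, beauty_net, device="cuda"):
    """frame: uint8 HxWx3 tensor on the CPU. Returns the beautified full-resolution frame."""
    x0, y0, x1, y1 = detect_face(frame)        # lightweight CPU face detector -> bounding box

    face = frame[y0:y1, x0:x1].permute(2, 0, 1).float().unsqueeze(0) / 255.0
    face = face.to(device)                     # only the face crop is uploaded to the GPU

    with torch.no_grad():
        out = beauty_net(face).clamp(0, 1)     # the heavy model runs on the small crop only

    out = (out[0].permute(1, 2, 0) * 255).byte().cpu()
    result = frame.clone()
    result[y0:y1, x0:x1] = out                 # composite the retouched crop back in place
    return result
```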
Meitu's optimization accelerator: the Manis AI inference framework

Manis-based model optimization is the core step that allows the video beautification algorithm to ship, and Manis plays an even broader role across Meitu's products: it delivers extreme performance optimization on mobile while also building the ecosystem that accelerates the rollout of AI projects. The performance comparison against a mainstream open-source framework (Figure 8) makes Manis's inference capability and performance gains plain to see.

[Figure 8: Performance comparison between Manis and the latest Q3 release of an open-source framework]

In practice, Manis provides AI services, the Tianshu system, operations monitoring and other functions, and accelerates algorithms mainly through the following three modules:

Model conversion module

Quickly converts mainstream model formats into the Manis model format so that algorithms can be integrated smoothly. It also simplifies the model structure through graph optimization and adds optimization controls for the various execution devices (CPU, GPU, AI chips), achieving performance optimization at the model level.

Model testing module

Because Manis is deployed on mainstream phones, it can test models online and report their performance and evaluation data under various compute scenarios, allowing algorithms to be validated quickly, helping models iterate continuously and shortening the optimization cycle.

Model inference module

Manis is highly adapted to a wide range of hardware, including CPU, GPU, DSP, NPU, ANE and APU; its GPU backends support OpenGL, OpenCL, Metal and CUDA, and its CPU backend supports fp32, fp16, bf16 and int8 precision.

Performance optimizations for mobile devices include assembly-level CPU NEON optimization, graph optimization, auto-tuning, multithreading and operator fusion. Precision optimizations cover fp32/fp16 floating-point computation, a bf16 computation strategy and 8-bit integer quantization; the framework can split the graph dynamically and mix precisions according to the device's inference capabilities to release its full compute power. For complex applications such as real-time beautification, customized strategies are used as well, including memory reuse, memory pools, model sharing and data-layout optimization.

[Figure 9: Architecture of Meitu's Manis AI inference framework]

Today Manis is present in every production scenario across Meitu's products, enabling Meitu's core AI algorithms to run with low latency, low memory use and low power consumption across different platforms and hardware. Manis will continue to iterate and be optimized to deliver even better performance for real-time applications.