Microsoft Research Asia: How Should We View the Future Direction of Computer Vision?

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文分享自百度開發者中心","attrs":{}},{"type":"link","attrs":{"href":"https://developer.baidu.com/article.html#/articleDetailPage?id=293504?from=010726","title":"","type":null},"content":[{"type":"text","text":"https://developer.baidu.com/article.html#/articleDetailPage?id=293504?from=010726","attrs":{}}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"先說一個現象:","attrs":{}},{"type":"text","text":"在深度學習的驅動下,計算機已經在多個圖像分類任務中取得了超越人類的優異表現。但面對一些不尋常的圖像,以“深度”著稱的神經網絡還是無法準確識別。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"再說一個現象:","attrs":{}},{"type":"text","text":"人類的視覺系統是通過雙眼的立體視覺來感知深度的。通過大量實際場景的經驗積累以後,人類可以在只有一張圖像的情況下,判斷圖像中物體的前後距離關係。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,計算機視覺有一種未來走向是:“借用”人類視覺的特點,設計模型。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以領域爲例。在計算機視覺領域,單目深度估計試圖模擬人類的視覺,旨在在只有一張圖像作爲輸入的情況下,預測出每個像素點的深度值。單目深度估計是 3D 視覺中一個重要的基礎任務,在機器人、自動駕駛等多個領域都具有廣泛的應用,是近年來的研究熱點。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前通用的解決方案是依賴深度學習強大的擬合能力,在大量數據集上進行訓練,試圖獲取深度估計的能力。這一“暴力”解法儘管在某些特定數據集的測試場景上取得了優異的結果,但是網絡的泛化能力較差,很難遷移到更一般的應用情形,無法適應不同的光照條件、季節天氣,甚至相機參數的變化。其中一個具體的例子就是,相同的場景在不同光照條件下的輸入圖像,經過同一個深度估計網絡,會出現截然不同的預測結果。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"造成這一結果的原因在於,從人類感知心理學的相關研究中可以發現人的視覺系統更傾向於利用形狀結構特徵進行判斷,而卷積神經網絡則更依賴紋理特徵進行判斷。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"例如,給定一隻貓的圖像,保留貓的輪廓,再使用大象的紋理去取代貓的皮毛紋理,人類傾向於認爲圖像的類別是貓,但是網絡卻會判定爲大象。這種不一致性,會導致網絡強行學習到的規律和人類不一致,很難完成對人類視覺系統的模擬。具體到深度估計領域,圖像的紋理變化,例如不同的光照、天氣、季節造成的影響都會對模型產生較大的影響。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/b4/b4bbf308c3781f0e667a7a006a46e3c3.jpeg","alt":"pgsource1940ef5c.jpg","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"圖1:(a)大象紋理圖像;(b)貓圖像;(c)用大象紋理取代貓皮毛紋理的圖像。圖片來源:","attrs":{}},{"type":"link","attrs":{"href":"https://openreview.net/pdf?id=Bygh9j09KX","title":null,"type":null},"content":[{"type":"text","text":"https://openreview.net/p
The root cause, according to research in human perceptual psychology, is that the human visual system tends to judge by shape and structure, whereas convolutional neural networks rely more on texture.

For example, given an image of a cat whose outline is kept but whose fur is replaced with elephant-skin texture, humans still tend to say the image shows a cat, while the network classifies it as an elephant. Because of this inconsistency, the regularities a network is forced to learn diverge from those humans use, making a faithful simulation of the human visual system difficult. In depth estimation specifically, texture changes in the image, such as those caused by different lighting, weather, or seasons, strongly affect the model.

(Image: https://static001.geekbang.org/infoq/b4/b4bbf308c3781f0e667a7a006a46e3c3.jpeg)

Figure 1: (a) an elephant-texture image; (b) a cat image; (c) the cat image with its fur replaced by elephant texture. Source: https://openreview.net/pdf?id=Bygh9j09KX

An even more serious problem is that networks tend to judge from local color information rather than from the overall layout of the image. For instance, a deep network may mistake a white truck on the road ahead for a white cloud, judging a nearby truck to be a distant cloud. In autonomous driving this kind of error is fatal: the vehicle fails to properly avoid the white truck, causing a serious accident.

Applying Human Vision to Depth Estimation

How can these two "fatal" problems be solved, so that deep networks generalize better?

Although the misjudgment problem can be eased by enlarging the training set, collecting data carries heavy costs in labor and resources. Computer graphics techniques can generate large amounts of training data cheaply, but because synthetic and real data differ in color and tone, a depth estimation network trained on synthetic data still struggles to generalize to real application scenarios.

Researchers at Microsoft Research Asia therefore proposed a more general approach: imitate the human visual system. The resulting work, "S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation" (paper: https://arxiv.org/pdf/2104.00877.pdf), has been accepted at CVPR 2021. By drawing on characteristics of human vision, it probes what a network essentially needs for monocular depth estimation and endows the network with strong generalization for the task.

(Image: https://static001.geekbang.org/infoq/9e/9e88cb27d4168070eb31547d6ab803ad.jpeg)

The idea is as follows. Since the human visual system relies mainly on structural information for perception (a person can, for example, read scene depth from a sketch containing nothing but structure), the researchers decouple structure from texture: they first extract the structural information in the image and discard the irrelevant texture, then estimate depth from the structure alone.

A depth network designed this way is free of texture effects and generalizes far better. The model in the paper, S2R-DepthNet (Synthetic-to-Real Depth Network), is trained only on synthetic data and never sees a real image from the target domain, yet with no extra steps it performs well on real datasets, far surpassing methods based on domain adaptation.

To obtain a depth-specific structural representation, S2R-DepthNet first uses the proposed structure extraction (STE) module to pull a general structural representation out of the image, as shown in Figure 2. The structure obtained at this stage, however, is generic and low-level, and still contains plenty of depth-irrelevant structure, such as the structure on smooth surfaces (lane markings, or a photo hanging on a wall).

(Image: https://static001.geekbang.org/infoq/32/3247d3fc1c86269b68759478a65d7682.jpeg)

Figure 2: Overall network architecture

The researchers therefore added a depth-specific attention (DSA) module that predicts an attention map to suppress the depth-irrelevant structure. Because only depth-specific structural information reaches the final depth prediction network, a fully trained S2R-DepthNet generalizes extremely well, covering real data it has never seen. The overall data flow is sketched below.
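The sketch below is a schematic reading of Figure 2, not the released implementation: the three modules are tiny stand-ins, and the assumption that the attention map gates the structure map by element-wise multiplication is ours.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(inplace=True))

class S2RPipeline(nn.Module):
    """Schematic S2R-DepthNet data flow: image -> STE -> general structure
    map; DSA -> attention map; gated structure -> depth prediction network."""
    def __init__(self, ste, dsa, depth_net):
        super().__init__()
        self.ste, self.dsa, self.depth_net = ste, dsa, depth_net

    def forward(self, img):
        s = self.ste(img)       # general, low-level structure map
        a = self.dsa(s)         # attention in [0, 1]: keeps depth-relevant
                                # structure, suppresses the rest
        return self.depth_net(s * a)   # element-wise gating (our assumption)

# Stand-in modules, far smaller than the real ones.
ste = nn.Sequential(conv_block(3, 16), nn.Conv2d(16, 1, 3, padding=1))
dsa = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1),
                    nn.Sigmoid())
depth_net = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1),
                          nn.Softplus())

model = S2RPipeline(ste, dsa, depth_net)
print(model(torch.rand(1, 3, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```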
The STE module's purpose is to extract domain-invariant structural information from images of different styles. As shown in Figure 3, it consists of an encoder Es that extracts the structural information and a decoder Ds that decodes the encoded structure into a structure map.

(Image: https://static001.geekbang.org/infoq/51/5136b16f633e04fd0454635fa6742220.jpeg)

Figure 3: The STE module encoder Es

The structure maps are trained as shown in Figure 4. The researchers train the STE encoder Es inside an image-to-image translation framework. To make the network handle images of many styles and to decouple general image structure from appearance, they take the style dataset Painter By Numbers (PBN) as the target domain and the synthetic data as the source domain. A shared structure encoder and two private style encoders extract the structure and style information of the source and target domains, respectively. Structure and style are then disentangled by combining an image self-reconstruction loss, a latent-feature self-reconstruction loss, and an adversarial loss. A structure encoder trained this way can encode general structural information; a sketch of the combined objective follows the figure.

(Image: https://static001.geekbang.org/infoq/6f/6f84face39c9aa4292ecbdd871920614.jpeg)

Figure 4: Structure maps of real and synthetic images
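One plausible way to wire up the three loss terms is sketched below. All module arguments are stand-ins, the loss weights are illustrative, and the exact placement of the adversarial term (here, a domain discriminator on the shared structure codes, generator side only) is our assumption rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def disentangle_loss(x_src, x_tgt, E_struct, E_style_src, E_style_tgt,
                     G_src, G_tgt, D_struct,
                     w_img=10.0, w_feat=1.0, w_adv=1.0):
    # Shared structure encoder, private style encoder per domain.
    s_src, s_tgt = E_struct(x_src), E_struct(x_tgt)
    t_src, t_tgt = E_style_src(x_src), E_style_tgt(x_tgt)

    # 1) Image self-reconstruction: own structure + own style -> image back.
    rec_src, rec_tgt = G_src(s_src, t_src), G_tgt(s_tgt, t_tgt)
    loss_img = F.l1_loss(rec_src, x_src) + F.l1_loss(rec_tgt, x_tgt)

    # 2) Latent-feature self-reconstruction: re-encoding the reconstruction
    #    should return the same structure code.
    loss_feat = (F.l1_loss(E_struct(rec_src), s_src)
                 + F.l1_loss(E_struct(rec_tgt), s_tgt))

    # 3) Adversarial term (generator side): make target-domain structure
    #    codes indistinguishable from source-domain ones to a domain
    #    discriminator. The alternating discriminator update is omitted.
    logits = D_struct(s_tgt)
    loss_adv = F.binary_cross_entropy_with_logits(logits,
                                                  torch.ones_like(logits))

    return w_img * loss_img + w_feat * loss_feat + w_adv * loss_adv

# Toy stand-ins, only to exercise the function end to end.
ident = lambda x: x
take_style = lambda x: x.mean(dim=(2, 3), keepdim=True)   # crude "style" code
decode = lambda s, t: s + t                               # crude "decoder"
disc = lambda s: s.mean(dim=(1, 2, 3)).unsqueeze(1)       # crude logits

x_src, x_tgt = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
print(disentangle_loss(x_src, x_tgt, ident, take_style, take_style,
                       decode, decode, disc).item())
```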
To train the STE module's decoder, the researchers attach a depth estimation network after it, so that a loss imposed on the predicted depth lets depth be recovered from the structure map. In addition, a heuristic loss is applied to the structure map itself to highlight its depth-relevant regions, as given by the formula below.

(Image: https://static001.geekbang.org/infoq/94/9416af5764e8dc93eb5ffa8f1afb0565.jpeg)

The extracted structure map is a general one: it contains depth-relevant structure but also depth-irrelevant structure, so the proposed depth-specific attention module predicts an attention map that effectively suppresses the depth-irrelevant part. Because the structure encoder contains multiple Instance Normalization (IN) layers, it loses many discriminative features and retains little semantic information; the DSA module is therefore built from many dilated convolutions, which effectively enlarge the receptive field while preserving resolution.
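Growing the receptive field without losing resolution is exactly what dilated (atrous) convolutions provide. Below is a minimal sketch of such an attention head; the channel widths and dilation rates are our guesses, not the paper's DSA configuration.

```python
import torch
import torch.nn as nn

class DilatedAttention(nn.Module):
    """Attention head built from dilated convolutions: stride stays 1 so
    resolution is preserved, while stacked dilations (1, 2, 4, 8) grow the
    receptive field roughly exponentially. Sigmoid keeps the map in [0, 1].
    Widths and rates are illustrative, not the paper's DSA settings.
    """
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        layers, ch = [], in_ch
        for rate in (1, 2, 4, 8):
            # padding == dilation keeps spatial size unchanged for k=3
            layers += [nn.Conv2d(ch, width, 3, padding=rate, dilation=rate),
                       nn.ReLU(inplace=True)]
            ch = width
        layers += [nn.Conv2d(width, in_ch, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, structure_map):      # (B, C, H, W)
        return self.net(structure_map)     # attention in [0,1], same H, W

s = torch.rand(1, 1, 96, 320)              # a structure map
print(DilatedAttention()(s).shape)         # torch.Size([1, 1, 96, 320])
```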
With this attention module in place, the researchers obtain a depth-specific structured representation that can be fed directly into the depth estimation network for prediction, enabling transfer between domains.

The researchers visualized both the learned general structural representation and the depth-specific one. As Figure 2 shows, even though synthetic and real data look clearly different as images, the learned structure maps and depth-specific representations share many similarities.

Quantitative results are shown in Table 1. Whereas domain adaptation methods mix real images from the target domain into training on synthetic data, this method uses only synthetic images during training yet still achieves a marked improvement in generalization, because it captures the structured representation that is essential to the depth estimation task.

(Image: https://static001.geekbang.org/infoq/24/244d1673b343b6389b1dc40841d4747f.jpeg)

Table 1: Depth estimation results from synthetic to real data

The proposed structured-representation method better matches the characteristics of the human visual system, so it can be extended to other tasks such as image classification, detection, and segmentation. The researchers also simplified the whole training pipeline, implementing all of the structured representation learning with a single ResNet-based backbone; trained on ImageNet, that model achieved state-of-the-art generalization across several downstream tasks (classification, detection, and segmentation). The follow-up work has been submitted to NeurIPS 2021, with the paper and code to be released soon.

Paper: S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation
Link: https://arxiv.org/pdf/2104.00877.pdf
Code: https://github.com/microsoft/S2R-DepthNet

https://developer.baidu.com/?from=010726