All About "Hair Growth": MT Lab Ships Multiple Hair-Generation Projects

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當逐漸後移的髮際線和日益稀疏的劉海成爲焦慮的源頭,爲了滿足這屆用戶對於濃密秀髮的嚮往,多年深耕人工智能領域的美圖公司技術大腦——美圖影像實驗室(MT Lab)基於在深度學習領域積累的技術優勢,落地了多個頭發生成項目並實現了高清真實的頭髮紋理生成,目前已率先在美圖旗下核心產品美圖秀秀及海外產品AirBrush上線劉海生成、髮際線調整與稀疏區域補發等功能,滿足用戶對髮型的多樣化需求。其中,劉海生成功能可以基於自定義的生成區域,生成不同樣式的劉海(如圖1.1-1.3);髮際線調整功能在保持原有髮際線樣式的情況下,可以對髮際線的不同高度進行調整(如圖2.1-2.2);稀疏區域補發則可以在指定區域或者智能檢測區域中,自定義調整稀疏區域的頭髮濃密程度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/4e\/4e98ef8a2349092f2b31c52f1f465c5a.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖 1.1 劉海生成(左:原圖,右:全劉海生成效果圖)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ce\/cee91027bca70cc20b7bb7475155d450.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖 1.2 劉海生成(左:原圖,右:全劉海生成效果圖)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/f4\/f4f34daf5c3c8c27400b8161136fcefe.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖 1.3 多款劉海生成效果圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/f7\/f772c2bd9e57fe040a12d2ff5770c601.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖 2.1 髮際線調整前後對比圖"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/fb\/fbf51b73961d22af0201218039cab802.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"圖 2.2 
Meitu's hair-generation pipeline, end to end

Challenges in hair generation

As a generation task, hair editing still faces several key technical bottlenecks on the way to production:

- First, obtaining training data. For bangs generation, the ideal paired data for synthesizing a specific bangs style would be photos of the same person with and without bangs, yet the odds of collecting such real pairs are close to zero. The alternative, deliberately collecting photos of specific bangs styles to build an attribute-specific unpaired dataset, makes high-quality, diverse data extremely costly to gather and is essentially impractical.

- Second, generating high-resolution image detail. Hair regions carry intricate texture, and realistic, well-formed strands are hard to produce with a CNN. With paired data, one can train supervised networks in the style of pix2pixHD [1] or U2-Net [2], but the sharpness achievable this way remains very limited. With unpaired data, attribute-transfer methods along the lines of HiSD [3], StarGAN [4], or CycleGAN [5] apply, but their outputs suffer not only from poor sharpness but also from unstable attribute generation and unrealistic results.

To address both problems, MT Lab drew on its large data resources and model-design expertise and built its solution on StyleGAN [6], which resolves the two core obstacles of hair generation: paired-data creation and high-resolution detail. StyleGAN, a leading representative of GANs (generative adversarial networks) in image generation, is an unsupervised, high-resolution image generator driven by style inputs. Trained on FFHQ, a dataset of 70,000 1024×1024 high-resolution face images, it produces sharp, lifelike results through careful network design and training techniques. Its style-based inputs also give it editing power: manipulating the latent vector changes the semantic content of the generated image.

[Image: https://static001.geekbang.org/infoq/ca/cad57b058eebd6041ea0cdd396fee6bc.png]
Figure 3 Images generated by StyleGAN
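For a concrete starting point, here is a minimal sketch of sampling faces from a pretrained StyleGAN2 generator. It assumes the checkpoint format of NVIDIA's stylegan2-ada-pytorch repository; the ffhq.pkl path is a placeholder, and none of this is MT Lab's own code.

```python
# Minimal sketch: sample faces from a pretrained StyleGAN2 generator.
# Assumes the stylegan2-ada-pytorch checkpoint format; run it inside that
# repository so the pickled network classes can be resolved.
import pickle
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with open('ffhq.pkl', 'rb') as f:               # placeholder checkpoint path
    G = pickle.load(f)['G_ema'].to(device)      # moving-average generator

z = torch.randn(4, G.z_dim, device=device)      # latents z ~ N(0, I)
w = G.mapping(z, None)                          # map z into the w+ style space
img = G.synthesis(w, noise_mode='const')        # render 1024x1024 faces
img = (img.clamp(-1, 1) + 1) / 2                # rescale to [0, 1] for viewing
```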
A StyleGAN-based hair-editing approach

1. Paired-data generation

The most direct way to produce paired data with StyleGAN is to edit the attribute's latent vector directly in the w+ space and generate the attribute from it, using latent-editing methods such as GANSpace [7], InterFaceGAN [8], or StyleSpace [9]. Images generated this way, however, usually suffer from entangled attribute vectors: producing the target attribute tends to change other attributes as well, such as the background or the person's identity.
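In code, this kind of editing reduces to shifting a latent code along an attribute direction. The sketch below assumes a generator G as in the earlier snippet; the direction vector is a hypothetical stand-in for one learned with an InterFaceGAN-style classifier, which the article does not cover.

```python
# Sketch of InterFaceGAN-style attribute editing in w+ space.
# `direction` is a hypothetical unit vector for the target attribute
# (e.g. "bangs"), typically fit with a linear model on labeled latents.
import torch

def edit_in_w_plus(G, w, direction, strength=2.0, layers=slice(0, 8)):
    """Shift `w` ([N, num_ws, w_dim]) along `direction` ([w_dim]) and re-render."""
    w_edit = w.clone()
    # Restricting the edit to coarse/middle layers reduces, but does not
    # eliminate, the entanglement with background and identity noted above.
    w_edit[:, layers, :] += strength * direction
    return G.synthesis(w_edit, noise_mode='const')
```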
MT Lab therefore turned to iterative reconstruction, combining the StyleGAN projector [6], PULSE [10], and Mask-Guided Discovery [11], to solve paired-data generation for hair. The main idea is to roughly edit the original photo into a crude reference image carrying the target attribute, then treat both it and the original as reference images and iteratively reconstruct with StyleGAN.

Take dyeing hair a light color as an example. The hair region of the original photo is first painted with a uniform light-colored block and downsampled into a rough edited sketch, which serves as the target-attribute reference image. During StyleGAN's iterative reconstruction, the generated image is supervised for similarity against the original photo at high resolution, ensuring that everything outside the hair region stays unchanged; meanwhile, the downsampled generated image is supervised against the target-attribute reference, ensuring that the generated light-colored region coincides with the original photo's hair region. Iterating under this balance of supervision produces the desired image, and along the way a paired sample of the same person with and without light-colored hair (see Figure 4 for the full pipeline). Note that three conditions must hold throughout: the generated image must match the reference on the target attribute; it must match the original photo outside the target-attribute region; and its latent vector must stay within StyleGAN's latent distribution, which is what guarantees a high-resolution final image.

[Image: https://static001.geekbang.org/infoq/1d/1ddef5863ab22c2189ade8d988a6d85a.png]
Figure 4 StyleGAN iterative reconstruction for light hair coloring
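That balance of supervision can be written as a small latent-optimization loop. The following is a schematic sketch only: it assumes a generator G as above, a binary hair mask, and the lpips package for perceptual similarity, and the loss weights and step count are illustrative rather than MT Lab's values.

```python
# Sketch of the dual-supervision iterative reconstruction for paired data.
# Assumes: G.synthesis(w) -> [1, 3, 1024, 1024] in [-1, 1]; `orig` is the
# original photo, `ref_small` the downsampled rough color edit, and
# `hair_mask` a binary hair mask, all tensors on the same device as G.
import torch
import torch.nn.functional as F
import lpips  # perceptual similarity, pip install lpips

def make_pair(G, w_init, orig, ref_small, hair_mask, steps=300):
    percep = lpips.LPIPS(net='vgg').to(orig.device)
    w = w_init.clone().requires_grad_(True)     # start from projecting `orig`
    opt = torch.optim.Adam([w], lr=0.01)
    for _ in range(steps):
        img = G.synthesis(w, noise_mode='const')
        keep = 1.0 - hair_mask
        # High-resolution supervision outside the hair region: keep the
        # background and identity identical to the original photo.
        loss_keep = percep(img * keep, orig * keep).mean()
        # Downsampled supervision against the rough edit: steer the hair
        # region toward the target attribute (the light color).
        img_small = F.interpolate(img, size=ref_small.shape[-2:],
                                  mode='bilinear', align_corners=False)
        loss_attr = F.mse_loss(img_small, ref_small)
        # Pull w toward the average latent so it stays in-distribution and
        # the output stays sharp (illustrative regularizer and weight).
        loss_reg = (w - G.mapping.w_avg).pow(2).mean()
        loss = loss_keep + loss_attr + 0.01 * loss_reg
        opt.zero_grad(); loss.backward(); opt.step()
    return G.synthesis(w, noise_mode='const'), w.detach()  # image + latent
```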
Beyond hair coloring, the same recipe yields paired data for hairline adjustment (Figure 5), bangs generation (Figure 6), and hair volumizing (Figure 7).

[Image: https://static001.geekbang.org/infoq/3f/3f59ed36fc3fa9c4096093b4612a5576.png]
Figure 5 Paired hairline data

[Image: https://static001.geekbang.org/infoq/80/80c03e649105bb7f9a6c01bbc79195e3.png]
Figure 6 Paired bangs data

[Image: https://static001.geekbang.org/infoq/19/19f25e9a419b3567f82a678f5f729cfb.png]
Figure 7 Paired hair-volumizing data

2. Paired-data augmentation

Iterative reconstruction also yields the StyleGAN latent vector behind each image in a pair, and interpolating those latents augments the data, producing as many pairs as needed. Take the hairline pairs in Figure 8: (a) and (g) form one pair, (c) and (i) another. Within a pair, interpolation yields pairs with intermediate degrees of hairline adjustment: (d) and (f) are interpolations within (a)-(g) and (c)-(i), respectively. Across two pairs, interpolation likewise yields more pairs: (b) and (h) are obtained by interpolating (a)-(c) and (g)-(i), respectively. Interpolated pairs can themselves be interpolated into new ones: (e) comes from (b) and (h). This is enough to meet the demand for ideal hairline-adjustment pairs.

[Image: https://static001.geekbang.org/infoq/e8/e8427523473b3062a4cda1fee1ac38d7.png]
Figure 8 Paired-data augmentation
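In code, the augmentation is plain linear interpolation of w+ latents. The sketch below, with hypothetical function names, builds the kind of grid shown in Figure 8 from two reconstructed pairs.

```python
# Sketch: densify paired training data by interpolating w+ latents.
# (w_a, w_a_edit) and (w_b, w_b_edit) are the latents of two pairs
# recovered by the iterative reconstruction above.
import torch

def lerp(w0, w1, t):
    """Linearly interpolate two w+ latents of shape [N, num_ws, w_dim]."""
    return (1.0 - t) * w0 + t * w1

def augment_pairs(G, w_a, w_a_edit, w_b, w_b_edit, ts=(0.25, 0.5, 0.75)):
    pairs = []
    for t in ts:
        w_src = lerp(w_a, w_b, t)              # blend the two "before" latents
        w_dst = lerp(w_a_edit, w_b_edit, t)    # blend the two "after" latents
        pairs.append((G.synthesis(w_src, noise_mode='const'),
                      G.synthesis(w_dst, noise_mode='const')))
    # Interpolating within a pair (between w_a and w_a_edit) instead gives
    # intermediate degrees of the edit, e.g. partially raised hairlines.
    return pairs
```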
3. Image-to-image generation

With paired data obtained from StyleGAN's iterative reconstruction, a pix2pixHD model can be trained under full supervision. This image-to-image setup is comparatively stable and robust, but its output sharpness still falls short of the ideal, so a pretrained StyleGAN model is folded into the image-to-image model to lift the generated detail. The conventional way to do image-to-image with StyleGAN is to obtain the input image's latent vector through an encoder network, edit that latent directly, and then generate the target-attribute image; but images generated this way usually bear little resemblance to the original, which fails the requirement of editing on top of the source photo. MT Lab therefore reworked the latent-editing scheme in two ways: first, the encoder maps the original image directly to the latent vector of the target attribute, skipping the intermediate latent-editing step; second, the encoder's features are fused with StyleGAN's features, and the fused features generate the target-attribute image, preserving similarity to the original as far as possible. The overall architecture closely resembles the GLEAN [12] model. This design takes care of both central concerns, high-resolution detail and faithfulness to the original, and completes the full pipeline for generating sharp hair with realistic texture (Figure 9).

[Image: https://static001.geekbang.org/infoq/97/9774115d8e0e6c85cf55f2d78d34b7a7.png]
Figure 9 Hair-generation network architecture
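The sketch below is one possible reading of that description: an encoder predicts target-attribute w+ latents plus a skip feature, a pretrained StyleGAN synthesis network renders from the latents, and a fusion head merges the two paths. It is a collapsed toy version (GLEAN fuses at every synthesis scale), not MT Lab's released code.

```python
# Toy GLEAN-style generator: encoder features fused with StyleGAN features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderFusionGenerator(nn.Module):
    def __init__(self, synthesis, num_ws=18, w_dim=512):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 64, 3, 2, 1), nn.LeakyReLU(0.2))
        self.conv2 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.LeakyReLU(0.2))
        # Map the input photo straight to target-attribute latents: no
        # intermediate latent-editing step.
        self.to_w = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                  nn.Linear(128 * 4 * 4, num_ws * w_dim))
        self.synthesis = synthesis                  # pretrained StyleGAN blocks
        self.fuse = nn.Conv2d(3 + 64, 64, 3, 1, 1)  # merge render + skip path
        self.out = nn.Conv2d(64, 3, 3, 1, 1)
        self.num_ws, self.w_dim = num_ws, w_dim

    def forward(self, x):
        f1 = self.conv1(x)                          # encoder skip feature, H/2
        f2 = self.conv2(f1)                         # deeper feature, H/4
        w = self.to_w(f2).view(-1, self.num_ws, self.w_dim)
        img = self.synthesis(w)                     # StyleGAN render from w+
        f1_up = F.interpolate(f1, size=img.shape[-2:], mode='bilinear',
                              align_corners=False)
        fused = self.fuse(torch.cat([img, f1_up], dim=1))
        return self.out(fused)                      # detail-preserving output
```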
Extending the StyleGAN editing approach

The StyleGAN editing-generation approach lowers the design difficulty of generation tasks while raising R&D efficiency and markedly improving output quality, and it extends well beyond hair. Using StyleGAN to produce ideal paired data greatly reduces the difficulty of image-editing tasks in general: pointing the scheme at attributes other than hair yields paired data for further attributes, such as facial-feature replacement (Figure 10), opening the door to production work on essentially any face-attribute edit. Likewise, the StyleGAN-pretrained image-to-image scheme, because it preserves output sharpness, carries over to more general generation tasks such as image inpainting, denoising, and super-resolution.

[Image: https://static001.geekbang.org/infoq/b1/b1807d97d02a8975e678cabfafbcba2a.jpeg]
Figure 10 Paired data for facial-feature replacement: original (left), reference (middle), result (right)

MT Lab has meanwhile made further breakthroughs in image generation, achieving high-resolution portrait generation with fine-grained control. Beyond hair generation, it has shipped face-attribute editing features such as teeth reshaping, eyelid generation, and makeup transfer, along with socially viral effects including AI face swap, aging and de-aging, gender swap, and smile generation, giving users a more playful, higher-quality experience backed by substantial engineering and research investment. Deep learning will remain one of MT Lab's core research areas, with continued work on frontier techniques and on driving technical innovation in the industry.

References:

[1] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR, 2018.

[2] Xuebin Qin, Zichen Zhang, Chenyang Huang, Masood Dehghan, Osmar R. Zaiane, and Martin Jagersand. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition, 2020.

[3] Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu, and Rongrong Ji. Image-to-image translation via hierarchical style disentanglement. In CVPR, 2021.

[4] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.

[5] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.

[6] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, 2020.

[7] Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. GANSpace: Discovering interpretable GAN controls. In NeurIPS, 2020.

[8] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of GANs for semantic face editing. In CVPR, 2020.

[9] Zongze Wu, Dani Lischinski, and Eli Shechtman. StyleSpace analysis: Disentangled controls for StyleGAN image generation. arXiv preprint, 2020.

[10] Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, and Cynthia Rudin. PULSE: Self-supervised photo upsampling via latent space exploration of generative models. In CVPR, 2020.

[11] Mengyu Yang, David Rokeby, and Xavier Snelgrove. Mask-guided discovery of semantic manifolds in generative models. In NeurIPS Workshop, 2020.

[12] K. C. Chan, X. Wang, X. Xu, J. Gu, and C. C. Loy. GLEAN: Generative latent bank for large-factor image super-resolution. In CVPR, 2021.