A New Approach to Deepfake Defense: Tencent Unveils the MagDR Framework, Accepted at a Top AI Conference

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近日,計算機視覺領域世界三大頂會之一的 CVPR 2021 論文接收結果出爐,本次接收率約爲 27.3%,競爭十分激烈,騰訊安全研究團隊 Blade Team 以其在 AI 安全領域的發現而成功入選。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"騰訊首次公開 MagDR,爲對抗 Deepfake 提供了新思路"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本次,騰訊 Blade Team 被收錄的論文題爲 "},{"type":"text","marks":[{"type":"strong"}],"text":"《MagDR:Mask-guided Detection and Reconstruction for Defending Deepfakes》"},{"type":"text","text":",該論文首次公開了一種能夠消除對抗樣本對 Deepfake 干擾攻擊的方法,該方法對防止深度僞造能力濫用提出了新思考。同時,也可用於提升 AI 圖像處理的安全性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"論文鏈接:"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/2103.14211","title":"","type":null},"content":[{"type":"text","text":"https:\/\/arxiv.org\/abs\/2103.14211"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近年來,“AI 變臉”特效風靡全球,近期爆紅的“螞蟻呀嘿”再次掀起體驗和討論的熱潮,這種源自人工智能生成對抗網絡的新技術,能夠利用深度學習技術識別並交換圖片或視頻中的原始人像,不僅製作過程簡單,而且逼真度驚人,幾乎能達到以假亂真的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/4d\/4dcdbc8c3d59d3f6a2008d351d4f8056.jpeg","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作爲一項技術工具,Deepfake 有廣泛的應用空間。語音合成能讓計算機用人類的聲音說出上百種語言,視頻合成能讓《速度與激情》裏的 Paul Walker 復生,但若被濫用,也將帶來巨大的風險,對身份識別和社會信任帶來挑戰,比如基於此衍生出來的一鍵脫衣應用 DeepNude。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那麼,既然能用技術“造假”,能否用更強有力的技術去對抗?此前行業有研究顯示,在源圖像中加入人眼無法感知的對抗攻擊,就能夠通過對抗噪聲來干擾 Deepfake 圖像的生成結果,也就是說,通過在原圖中加入人眼看不到的噪聲,換臉模型就無法生成正確人臉了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但這一對抗手段近期被證明仍有風險。騰訊 Blade Team 提出了一個全新的 MagDR(mask-guided detection and reconstruction)的二階段框架。其核心思想在於使用一些非監督性指標對對抗樣本在 Deepfake 中所生成的結果進行敏感性評估,並且利用人臉屬性區域作爲輔助信息以及通過對最優防禦方法進行搜索組合的方式檢測和重建圖片,以期能夠達到淨化原圖並保持 Deepfake 輸出真實性的目的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"論文顯示,騰訊安全研究員選取了 Deepfake 中較爲重要的三個任務進行攻防實驗,分別爲 "},{"type":"text","marks":[{"type":"strong"}],"text":"換臉、人臉屬性修改以及表情變換。給原圖增加噪聲後,所產生的對抗樣本儘管對原圖進行了修改,但修改的程度明顯低於人眼可察覺的水平"},{"type":"text","text":","},{"type":"text","marks":[{"type":"strong"}],"text":"而 Deepfake 模型產生的深度僞造視頻卻已經崩壞,無法以假亂真,其對 Deepfake 
But this defense was recently shown to still carry risk. Tencent Blade Team proposes a new two-stage framework, MagDR (mask-guided detection and reconstruction). Its core idea is to use a set of unsupervised metrics to evaluate how sensitive the Deepfake output is to adversarial examples, to bring in facial attribute regions as auxiliary mask information, and to search for the best combination of defense operations for detecting and reconstructing the image, with the goal of purifying the original image while keeping the Deepfake output realistic.
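Read as a pipeline, that description amounts to a detection stage followed by a mask-guided reconstruction stage. The sketch below is a loose, hypothetical rendering of such a two-stage flow, not the paper's implementation: the metric set, the pool of purification operations, the thresholds, and the brute-force search are all assumptions made for illustration.

```python
# A hypothetical detect-then-reconstruct pipeline in the spirit of the description
# above. The unsupervised metrics, purification operations, thresholds, and the
# exhaustive search are illustrative assumptions, not MagDR's actual components.
import itertools

def detect(image, generator, metrics, thresholds):
    """Stage 1: flag the input as adversarial if any unsupervised metric computed
    on the generator's output exceeds its threshold."""
    output = generator(image)
    scores = {name: metric(image, output) for name, metric in metrics.items()}
    return any(scores[name] > thresholds[name] for name in scores), scores

def reconstruct(image, generator, face_mask, purify_ops, metrics):
    """Stage 2: search ordered combinations of purification operations, applying
    them only inside the facial-attribute mask, and keep the candidate whose
    Deepfake output scores best (lowest) under the metrics."""
    best_image, best_score = image, float("inf")
    for r in range(1, len(purify_ops) + 1):
        for ops in itertools.permutations(purify_ops, r):
            candidate = image
            for op in ops:
                # Purified pixels inside the face mask, original pixels outside.
                candidate = face_mask * op(candidate) + (1 - face_mask) * candidate
            score = sum(m(candidate, generator(candidate)) for m in metrics.values())
            if score < best_score:
                best_image, best_score = candidate, score
    return best_image
```

A caller would run `detect` first and only invoke `reconstruct` (and warn the user) when the input is flagged, which matches the workflow described below.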
According to the paper, Tencent's security researchers picked three of the more important Deepfake tasks for their attack-and-defense experiments: **face swapping, facial attribute editing, and expression manipulation**. **After noise is added to the original image, the resulting adversarial example alters the original by an amount clearly below what the human eye can perceive, yet the deep-forgery video produced by the Deepfake model is already badly corrupted and can no longer pass as real; for Deepfake the impact is catastrophic.**

But when the same input is processed through the MagDR framework, the picture changes. **The model first detects adversarial perturbations in the video and alerts the Deepfake user that the image or video in question is very likely under adversarial attack; it then reconstructs the video, effectively removing the perturbation injected by the attacker so that systems built on the Deepfake model can operate normally again.**

![](https://static001.geekbang.org/wechat/images/b1/b1ec1186724ccdf3cd905d54a1a55ebc.jpeg)

The MagDR framework not only removes the destructive effect of the adversarial perturbation, it also preserves the pixel-level detail of the original image, so the reconstructed Deepfake output is consistent with the result on the clean original.

![](https://static001.geekbang.org/wechat/images/a8/a8233f986fae5102ea33ebc8c687641c.png)

This finding shows that the proactive defense previously favored by the industry (anti-Deepfake adversarial perturbation) is no longer reliable; better Deepfake defenses are still needed to keep face photos on social networks from being misused.

Building on this finding, Tencent Blade Team's researchers also offer security suggestions: for example, crafting specific adversarial perturbations whose corrupting effect is constrained and looks more realistic, so as to bypass MagDR's current detection, or producing more robust perturbations that are harder for the reconstruction module to remove.

The researchers also expressed the hope that **others will adjust and innovate on MagDR's components or overall structure, treating it as a starting point for even stronger defense frameworks that prevent the malicious abuse of Deepfakes and further strengthen the security of photos and videos**. Technology keeps advancing, and only by "fighting AI with AI" can its safe application go further.

## Deepfakes Keep Being Abused. Is There Finally a Fix?

Synthetic image and video generation is a steadily growing subfield of computer vision, and it has advanced greatly since the introduction of generative adversarial networks (GANs) in 2014.

Deepfake is an umbrella term for techniques that use deep learning algorithms to synthesize fake images, videos, and similar media. It was coined in 2017 by a Reddit user who used the technique to make fake pornography of celebrities. In the years since, Deepfake has repeatedly been entangled with user privacy and security problems.

In 2019, DeepNude, an application a programmer built on Deepfake for entertainment, left many people exclaiming that all decency had been lost. Upload a photo of a woman, and with the help of neural networks DeepNude would automatically "strip off" her clothes and display a nude body. The author never expected the app to spread virally on such a scale in so short a time. Imagine the damage to women's privacy and reputations if DeepNude were abused at scale.

Fortunately, amid the controversy, the author soon shut DeepNude down. But someone who felt that taking the app offline violated the ideal of free information flow collected the original application's network architecture and trained models and created an open-source DeepNude project on GitHub.

GitHub repository: https://github.com/yuanxiaosc/DeepNude-an-Image-to-Image-technology/blob/master/README-ZH.md

The same year, when ZAO launched, its user agreement, privacy policy, and copyright terms were accused of over-collecting user information and infringing copyright. The unfriendly user agreement turned the app overnight from a viral sensation into the target of widespread criticism.

One authorization clause in ZAO's agreement at the time stated that by uploading and publishing content, users agreed to grant ZAO, its affiliates, and ZAO's users a right that is "completely free of charge, irrevocable, perpetual, sublicensable, and relicensable, worldwide", "including but not limited to modifying and editing user content in whole or in part (for example, replacing the face or voice in a short video with someone else's), disseminating the content before and after modification over information networks, and exercising all of the economic rights and neighboring rights enjoyed by copyright holders under the Copyright Law". Its questionable safety made the clause an immediate source of controversy once discovered, and although ZAO later quietly revised its user agreement in response to concerns about privacy leakage, it never managed to dispel users' worries.

Then there is Avatarify, the app behind this year's flood of "Ant Yahei" clips of tech leaders singing in unison, which was pulled from app stores in less than a week. The repeated abuse of Deepfake technology has alarmed users, provoked strong resistance from some of them, and pushed many technology companies to look for more powerful perturbation-based countermeasures against Deepfakes.

In December 2019, to address the abuse of AI face-swapping, Microsoft Research Asia proposed Face X-Ray, a method for detecting forged face images, including sophisticated ones; the work was later accepted at CVPR 2020.

Now Tencent's MagDR framework offers another feasible direction, and hopefully more developers will build on it with ideas that keep Deepfakes from being put to improper use.

## Legislation and Regulation of Deepfakes Are Making Progress

Beyond technical progress, governments are stepping up regulation. Although whether Deepfake face-swapping constitutes infringement is hard to pin down, countries and regions have nevertheless tightened oversight.

Earlier, the US state of Virginia formally expanded its revenge-porn law to strictly prohibit "deepfaked" content, including fabricated or manipulated videos and machine-learning-generated images; pornographic images made with Deepfake are covered as well. Violations are a Class 1 misdemeanor punishable by up to 12 months in jail and a fine of up to 2,500 US dollars.

Under the law, sharing someone's nude photos or videos without permission is illegal, whether the material is real or fake.
The committee reviewing the law will focus on revenge porn and the online transmission of sexual content, including the use of proximity-based file-sharing technologies such as Bluetooth to send unsolicited sexual images to people's phones.

The UK government is also debating a law that specifically addresses the creation and sharing of non-consensual intimate images, in response to abusive and offensive digital products. The committee discussing the law will focus on revenge pornography as well as pornographic content generated with Deepfake algorithms.

Last year, the official text of China's Civil Code, adopted at the Two Sessions, was released. The personality-rights section of the Civil Code explicitly prohibits using information technology to forge and otherwise infringe on others' portrait rights, taking direct aim at AI face-swapping and voice-changing. Article 1019 of the newly promulgated Civil Code provides:

> No organization or individual may infringe upon another person's portrait rights by defacing or defaming the portrait, or by using information technology to forge it, among other means. Without the consent of the portrait-rights holder, no one may produce, use, or publish the holder's portrait, except as otherwise provided by law.
>
> Without the consent of the portrait-rights holder, the holder of rights in a portrait work may not use or publish the portrait by publishing, reproducing, distributing, renting, exhibiting, or other means.

Before that, portrait-rights protection in China relied mainly on Article 100 of the General Principles of the Civil Law:

> Citizens enjoy the right of portrait. Without the person's consent, the portrait may not be used for profit.

## Closing Thoughts

In 2019, deep learning expert Yann LeCun reflected on Twitter: had he been able to foresee that convolutional neural networks (CNNs) would be misused, should he still have open-sourced CNNs?

If fear of technological risk meant refusing every revolutionary technology at birth, developing AI might lose its meaning. The technology itself is not at fault; it has merely been put to the wrong uses.

**About the team:**

Tencent Blade Team was founded by Tencent's Security Platform Department and focuses on forward-looking security research in frontier areas such as artificial intelligence, the mobile internet, the Internet of Things, and cloud virtualization. To date it has reported, and helped fix, more than 200 security vulnerabilities for well-known international companies including Apple, Amazon, Google, Microsoft, and Adobe.