DeepMind proposes a new reinforcement learning method for human-AI cooperation

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文來自BDTechTalks網站的“"},{"type":"link","attrs":{"href":"https:\/\/bdtechtalks.com\/tag\/ai-research-papers\/","title":null,"type":null},"content":[{"type":"text","text":"AI研究論文評論"}]},{"type":"text","text":"”專欄。該專欄提供人工智能最新發現的系列解讀文章。"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"儘管人工智能研究人員正力圖建立能在圍棋、星際爭霸到Dota等複雜遊戲中擊敗人類專家的強化學習系統,但如何創建出能與人類開展合作而非競爭的強化學習系統是人工智能正面臨的更大挑戰。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在一篇由DeepMind的人工智能研究人員"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/2110.08176","title":null,"type":null},"content":[{"type":"text","text":"最新預發佈的論文"}]},{"type":"text","text":"中,提出了一種稱爲FCP(Fictitious Co-Play,虛擬合作)的新方法。該方法實現智能體與不同技能水平人類間的合作,無需人工生成數據訓練強化學習智能體(agent)。論文已被今年的NIPS會議接收。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"論文通過使用一款稱爲Overcooked的解謎遊戲進行測試,結果表明在與人類玩家的組隊合作中,FCP方法創建的強化學習智能體表現更優,混淆度最低。論文結果可爲進一步研究人機協作系統提供重要方向。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"訓練強化學習智能體"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/bdtechtalks.com\/2021\/09\/02\/deep-reinforcement-learning-explainer\/","title":null,"type":null},"content":[{"type":"text","text":"強化學習"}]},{"type":"text","text":"可持續無休地學習任何具有明確獎勵(award)、動作(action)和狀態(state)的任務。只要具備足夠的計算能力和時間,強化學習智能體可根據所在的環境(environment)去學習出一組動作序列或“策略”,以實現獎勵(award)的最大化。強化學習在"},{"type":"link","attrs":{"href":"https:\/\/bdtechtalks.com\/2018\/07\/02\/ai-plays-chess-go-poker-video-games\/","title":null,"type":null},"content":[{"type":"text","text":"玩遊戲"}]},{"type":"text","text":"中的有效性,已得到很好的證明。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但強化學習智能體給出的遊戲策略通常並不能很好地匹配真人隊友的玩法。一旦組隊合作,智能體執行的操作會令真人隊友大感困惑。由此,強化學習難以應用於需各方參與者協同規劃和分工的場景。如何彌合機器智能與真人玩家間存在的鴻溝,是人工智能社區正面對的一個重要挑戰。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人員正致力於創建各種強化學習智能體,達到能適應包括其它強化學習智能體和人類在內的各合作方的行爲習慣。"}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/90\/dd\/90cf9f120eef3949895989a9d153e2dd.jpeg","alt":null,"title":"圖1 
But the policies that reinforcement learning agents develop usually do not match the way human teammates play. When teamed up with people, an agent takes actions that thoroughly confuse its human partners, which makes reinforcement learning hard to apply in settings where the participants must plan and divide work together. Bridging this gap between machine intelligence and human players is an important challenge facing the AI community.

Researchers are therefore trying to create reinforcement learning agents that can adapt to the behavior and habits of all kinds of partners, including other reinforcement learning agents and humans.

[Figure 1: Different methods for training reinforcement learning agents]

Self-play (SP) is the classic reinforcement learning training method for games: an agent plays continuously against a copy of itself, which makes it very efficient at learning a policy that maximizes the game's reward. The problem is that the resulting model overfits to the agent's own style of play and becomes unable to cooperate with players trained in any other way.

Another training method is population play (PP), which introduces teammate models with a variety of parameters and architectures into training. Although PP performs noticeably better than SP in games with human players, it still lacks the diversity needed for "common-payoff" settings, where players must solve a problem cooperatively and adjust their strategy as the environment changes.

A third method, behavioral cloning play (BCP), trains reinforcement learning agents on human-generated data. Instead of starting from randomly chosen initial conditions in the environment, BCP tunes the model's parameters on gameplay data collected from human players, so the agent's behavior comes closer to human play patterns. If data can be collected from players with different skill levels and play styles, the agent becomes more flexible in adapting to its teammates and is more likely to cooperate well with real players. The challenge with BCP, however, is obtaining the human data in the first place, especially since the amount of gameplay usually needed to bring a reinforcement learning model to its best settings is more than humans can provide.

The FCP method

The key idea behind DeepMind's new FCP method is to create agents that can cooperate with players of different styles and skill levels without relying on human-generated data.

FCP training has two stages. First, the DeepMind researchers created a set of self-play agents, each trained independently from different initial conditions so that the models converge to different parameter settings, which yields a diverse pool of reinforcement learning agents. To also diversify the skill levels in the pool, the researchers saved snapshots of each agent at different stages of training.

As the paper explains, the last checkpoint represents a fully trained, "skilled" partner, while earlier checkpoints represent partners whose skills are still immature; notably, using multiple checkpoints to obtain this skill diversity adds no extra training cost.

In the second stage, a new reinforcement learning agent is trained with all the agents in the pool as its teammates. The new agent has to tune its policy so that it can coordinate with partners of different parameter values and skill levels; according to the paper, the FCP agent thereby learns a single general policy across the given range of strategies and skills, which is what it needs in order to follow the lead of human players.
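The sketch below illustrates the two-stage structure just described on a toy two-player coordination game (both players score only when they pick the same action), using a simple REINFORCE-style update instead of the deep reinforcement learning used in the paper. The game, the hyperparameters, and all function names are assumptions made for illustration; this is not DeepMind's implementation.

```python
import numpy as np

K = 4                                        # number of possible actions
rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def train_self_play(seed, steps=2000, checkpoints=(200, 2000)):
    """Stage 1: train one self-play (SP) agent on the toy coordination game
    and snapshot its policy at several points, so the partner pool also
    contains partially trained, less skilled versions of it."""
    r = np.random.default_rng(seed)
    logits = r.normal(scale=0.01, size=K)    # one policy shared by both copies
    snapshots = []
    for t in range(1, steps + 1):
        a = r.choice(K, p=softmax(logits))   # the agent's action
        b = r.choice(K, p=softmax(logits))   # its copy's action
        reward = float(a == b)               # team reward: match or nothing
        grad = -softmax(logits)
        grad[a] += 1.0                       # REINFORCE-style push toward a
        logits += 0.1 * reward * grad
        if t in checkpoints:
            snapshots.append(logits.copy())
    return snapshots

# A diverse, frozen partner pool: several seeds x several checkpoints.
pool = [snap for seed in range(8) for snap in train_self_play(seed)]

def train_fcp(partner_pool, steps=5000):
    """Stage 2: train a new agent whose partner is sampled from the frozen
    pool at each step, so it must cope with many conventions and skill levels."""
    logits = np.zeros(K)
    for _ in range(steps):
        partner = partner_pool[rng.integers(len(partner_pool))]
        a = rng.choice(K, p=softmax(logits))
        b = rng.choice(K, p=softmax(partner))
        reward = float(a == b)
        grad = -softmax(logits)
        grad[a] += 1.0
        logits += 0.05 * reward * grad
    return logits

fcp_logits = train_fcp(pool)
print("FCP agent's action distribution:", np.round(softmax(fcp_logits), 2))
```

Each self-play run settles on its own arbitrary convention; because the FCP learner is trained against the whole frozen pool, including early, noisy checkpoints, it has to find a strategy that works across all of them rather than overfit to a single partner.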
Testing FCP

DeepMind's researchers applied FCP to the puzzle game Overcooked, in which players move around a grid-based environment, interact with objects, and carry out a series of steps to cook and deliver meals. Overcooked is well suited to the test because its game logic is simple, yet it requires coordination and division of labor between teammates.

For the experiments, the researchers used a simplified version of the full Overcooked game, carefully selecting a set of maps that pose a variety of challenges, including forced coordination and confined spaces.

[Figure 2: DeepMind tested FCP on a simplified version of Overcooked]

The researchers trained separate sets of SP, PP, BCP, and FCP agents. To compare the methods, each type of agent was paired with three kinds of held-out partners: a BCP model trained on human gameplay data, SP agents trained to different skill levels, and a randomly initialized agent representing a low-skill player. Performance was measured by the number of meals delivered over the same number of episodes.
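A small sketch of the evaluation protocol just described: every trained agent type is paired with every held-out partner type, and the team score (meals delivered per episode in the actual study) is averaged over episodes. The `play_episode` callback and the toy usage at the bottom are placeholders; the real evaluation runs the Overcooked environment.

```python
from itertools import product

def evaluate_pairing(agent_policy, partner_policy, play_episode, n_episodes=10):
    """Average team score when `agent_policy` is teamed with `partner_policy`.
    `play_episode` is assumed to run one cooperative episode and return the
    team's score (e.g. meals delivered)."""
    total = sum(play_episode(agent_policy, partner_policy) for _ in range(n_episodes))
    return total / n_episodes

def cross_play_table(agents, held_out_partners, play_episode):
    """Score every trained agent type (SP, PP, BCP, FCP, ...) against every
    held-out partner type (human-proxy BCP, SP checkpoints, random agent)."""
    return {
        (a_name, p_name): evaluate_pairing(a_pol, p_pol, play_episode)
        for (a_name, a_pol), (p_name, p_pol) in product(agents.items(),
                                                        held_out_partners.items())
    }

if __name__ == "__main__":
    # Dummy stand-in: "policies" are just numbers and an episode's score is
    # their sum, only to show how the comparison table is built.
    dummy_episode = lambda a, b: a + b
    agents = {"FCP": 3, "SP": 1}
    partners = {"human_proxy_BCP": 2, "random_init": 0}
    print(cross_play_table(agents, partners, dummy_episode))
```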
The results show that FCP clearly outperforms the other training methods and generalizes well across partners of various skill levels and play styles. Unexpectedly, the tests also show how brittle the other training methods are; as the paper notes, this suggests they may not even be able to team up with partners of middling skill.

[Figure 3: FCP outperforms the other methods for training reinforcement learning agents]

The paper then tested how each type of reinforcement learning agent performs when cooperating with humans. The researchers ran an online study with 114 human players, each of whom played 20 rounds. In every round the player was dropped into a random kitchen layout and teamed up with one of the agent types without being told which one.

According to the results, the human-FCP teams outperformed every other human-agent pairing.

After every two rounds, participants rated their experience with their agent teammate on a scale of 1 to 5. They showed a clear preference for the FCP teammate over the others, and their feedback indicated that the FCP agent behaved more coherently, was easier to predict, and adapted better. For instance, the agent appeared to sense its teammate's behavior and took on a specific role in each kitchen layout, so the two players did not get in each other's way.

By contrast, participants described the behavior of the other reinforcement learning agents as chaotic and hard to cooperate with.

[Figure 4: DeepMind teamed up different reinforcement learning agents with human players]

Next steps

In the paper, the researchers also point out some limitations of the work. For example, the FCP agent was trained with a pool of only 32 reinforcement learning partners. That is plenty for the simplified version of Overcooked, but it may become a limitation in more complex environments. As the DeepMind researchers note, the partner population FCP would need in order to represent sufficiently diverse strategies in more complex games might be out of reach.

Reward definition (https://bdtechtalks.com/2021/06/07/deepmind-artificial-intelligence-reward-maximization/) is another challenge to applying FCP in more complex environments. In the simplified Overcooked, the reward is simple and unambiguous. In other environments, a reinforcement learning agent must accomplish several sub-goals before it receives the main reward, and the way it accomplishes those sub-goals has to stay consistent with how its human partner would, which is hard to evaluate and tune without human data. As the researchers put it, if a task's reward function is badly misaligned with how humans approach the task, this method, like any method that lacks human data, would likely produce sub-optimal partners.

DeepMind's work belongs to the broader research area of human-AI cooperation. A recent study by MIT scientists (https://bdtechtalks.com/2021/11/01/reinforcement-learning-hanabi/) explored the limitations of reinforcement learning agents playing the card game Hanabi with human players.

DeepMind's new reinforcement learning technique is a step toward bridging the gap between humans and AI. The researchers hope it lays a solid foundation for future research on the important challenge of human-AI collaboration for the benefit of society.

Original article: DeepMind RL method promises better co-op between AI and humans (https://bdtechtalks.com/2021/11/22/deepmind-reinforcement-learning-fictitious-coplay/)