DeepMind科学家:强化学习足以满足通用AI需求

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"size","attrs":{"size":10}},{"type":"strong"}],"text":"本文是我们对"},{"type":"link","attrs":{"href":"https:\/\/bdtechtalks.com\/tag\/ai-research-papers\/?fileGuid=qgprgqTXvgQxwTXJ","title":"","type":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"AI研究论文的评论文章"}],"marks":[{"type":"italic"},{"type":"size","attrs":{"size":10}},{"type":"strong"}]},{"type":"text","marks":[{"type":"italic"},{"type":"size","attrs":{"size":10}},{"type":"strong"}],"text":"之一,这个系列主要探索人工智能领域的最新发现。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在创造人工智能的长达数十年的旅途中,计算机科学家设计并开发了各种复杂的机制和技术来复制视觉、语言、推理、运动技能和其他与智慧生命相关的能力。虽然这些努力已经带来了可以在有限环境中有效解决特定问题的AI系统,但他们还没有开发出见于人类和动物中的那种通用智能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/57\/7f\/57ca0a510aab2be7791d6d3c55769f7f.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在《人工智能》期刊提交给同行评审的一篇新论文中,英国人工智能实验室DeepMind的科学家认为,智能及其相关能力不是通过形成和解决复杂问题而产生的,而是源于长期遵循一个简单而强大的原则:奖励最大化."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这篇题为“"},{"type":"link","attrs":{"href":"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0004370221000862?fileGuid=qgprgqTXvgQxwTXJ","title":"","type":null},"content":[{"type":"text","text":"奖励就够了"}]},{"type":"text","text":"”的论文(在本文撰写时仍处于预证明阶段)从自然智能进化的相关研究以及人工智能的最新成就中汲取了灵感。作者认为,奖励最大化和试错经验足以培养出可表现与智力相关能力的行为。由此他们得出结论,强化学习这一基于奖励最大化理念的人工智能分支,可以引领通用人工智能的发展。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AI的两条路径"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/97\/ea\/972401287b5c63bca2b4dd2d061220ea.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"创建AI的一种常见方法是尝试在计算机中复制智能行为的元素。例如,我们对哺乳动物视觉系统的理解催生了各种视觉人工智能系统,这些系统可以对图像分类、定位照片中的对象、定义对象之间的边界等等。同样,我们对语言的理解有助于开发各种自然语言处理系统,例如问答、文本生成和机器翻译等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这些都是狭义人工智能的实例,这些系统旨在执行特定任务,不具备解决一般问题的能力。一些科学家认为,拼装多个狭义的人工智能模块会制成更高级别的智能系统。例如,你可以发展一个软件系统,其综合运用单独的计算机视觉、语音处理、NLP和电机控制模块,以解决需要多种技能的复杂问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"DeepMind研究人员提出的另一种创建人工智能的方法,是重新创建产生自然智能的简单而有效的规则。研究人员写道:“[我们]考虑了一个替代假设:最大化奖励的一般化目标足以驱动表现出自然和人工智能研究领域中大部分(如果不是全部)能力的行为。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这基本上就是大自然的机制。从科学角度来看,在我们周围看到的复杂有机体中并没有自上而下的智能设计。数十亿年的自然选择和随机变异过滤出了种种适合生存和繁殖的生命形式。能够更好地应对环境中各类挑战和情况的生物得以生存和繁衍下去,其余的都被淘汰了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这种简单而有效的机制推动了具有各种感知、导航、改变环境和相互交流技能和能力的生物的进化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人员说:“动物和人类面临的自然世界,以及人工代理未来将面对的环境,本质上是非常复杂的,需要复杂的能力才能在这些环境中取得成功(例如生存)。”“因此,以奖励最大化来衡量的成功需要各种与智力相关的能力。在这样的环境中,任何使奖励最大化的行为都必须表现出这些能力。从这个意义上说,奖励最大化的一般化目标包含了许多甚至可能是所有的智能目标。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"例如,考虑一只寻求减少饥饿这一奖励的松鼠。一方面,它的感官和运动技能帮助它在有食物时定位和收集坚果。但当食物变得稀缺时,只能找到食物的松鼠必然会饿死。这就是为什么它也具备计划技能和记忆来收集坚果,并在冬天存储它们。松鼠具有社交技能和知识,可以确保其他动物不会偷吃它的坚果。宏观来看,减少饥饿可能是“活下去”的一个子目标,后者还需要其他一些技能,例如发现和躲避危险动物、保护自己免受环境威胁以及寻找应对季节变迁的更好栖息地。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人员写道:“当与智力相关的能力作为奖励最大化的单一目标的解决方案出现时,实际上可能让我们获得了更深入的理解,因为它解释了这种能力为什么会出现。”“相比之下,当每个能力都被理解为针对特定目标的解决方案时,'为什么'这个问题就被回避了,重心都放在了这个能力能'做什么'的问题上。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最后,研究人员认为,“最通用和可扩展”的最大化奖励路径是让智能体与环境交互来不断学习。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"通过最大化奖励发展各种能力"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/65\/d2\/6549378914e4d2d8e58b3b6c647c71d2.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在论文中,人工智能研究人员提供了一些高层示例,说明“在最大化许多可行的奖励信号之一的服务中,智能和相关能力将如何隐式地浮现,并如何对应自然或人工智能领域可能探索的许多实用目标。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"例如,感官技能服务于在复杂环境中生存的需要。对象识别使动物能够检测食物、猎物、朋友和威胁,或找到路径、庇护所和栖息地。图像分割使它们能够分辨不同对象之间的差异,避免致命错误,例如从悬崖上掉下来或从树枝上掉下来。同时,听觉有助于动物发现它们看不到的威胁,或者找到伪装起来的猎物。触觉、味觉和嗅觉也赋予动物在栖息地更丰富的感官体验,和在危险环境中更多生存机会的优势。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"奖励和环境也塑造了动物与生俱来的知识。例如,由狮子和猎豹等掠食性动物统治的危险栖息地会奖励反刍动物,让它们自出生以来就具备逃避威胁的先天知识。同时,动物也因其学习栖息地特定知识的能力而获得奖励,例如在哪里可以找到食物和住所。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人员还讨论了语言、社交智能、模仿以及最终一般化智能的奖励驱动基础,他们将其描述为“在单一、复杂的环境中最大化单一奖励”。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在这里,他们类比了自然智能和AGI:“动物的经验流足够丰富和多样,它可能需要灵活的能力来实现各种各样的子目标(例如觅食、战斗或逃跑),以便成功地最大化其整体奖励(例如饥饿或繁殖)。类似地,如果一个人工智能代理的经验流足够丰富,那么许多目标(例如电池寿命或生存)可能隐式地需要实现同样广泛的子目标的各种能力,因此奖励最大化应该足以产生一种通用人工智能。”"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"奖励最大化的强化学习"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/80\/a0\/80d575d8cyyc7e9ab863172858fa4ba0.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":10}}],"text":"强化学习是人工智能算法的一个特殊分支,由三大关键要素组成:环境、代理和奖励。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过执行动作,代理会改变自己和环境的状态。根据这些动作对代理必须实现目标的影响程度,代理会受到奖励或惩罚。在许多强化学习问题中,智能体没有环境的初始知识,并从随机动作起步。根据收到的反馈,代理学习如何调整其行为,并制定最大化其奖励的策略。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在他们的论文中,DeepMind的研究人员建议将强化学习作为主要算法,它可以复制自然界中体现的奖励最大化机制,并最终走向通用人工智能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人员写道:“如果一个智能体可以不断调整其行为以提高其累积奖励,那么它所处环境反复要求的任何能力最终都必须产生在智能体的行为中”。研究人员补充说,在最大化奖励的过程中,一个好的强化学习代理最终可以学会感知、语言、社交智能等能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在论文中,研究人员提供了几个例子,展示了强化学习代理如何在游戏和机器人环境中学习一般技能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然而研究人员也强调,一些根本性的挑战仍未解决。例如,他们说,“我们不对强化学习代理的样本效率提供任何理论保证。”强化学习以需求海量数据而闻名。例如,强化学习代理可能需要几个世纪的游戏时间才能掌握某款计算机游戏的玩法。人工智能研究人员仍不知道如何才能创建出能将学习推广到多个领域的强化学习系统。因此,环境的微小变化往往也需要对模型进行全面重新训练。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"研究人员还承认,奖励最大化的学习机制是一个尚待解决的问题,也仍然是强化学习中有待进一步研究的核心问题。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"奖励最大化的优点和缺点"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/ce\/e6\/ce54967f3235d1425f3c3fb06f1bc9e6.jpg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":10}}],"text":"加州大学圣地亚哥分校的神经科学家、哲学家和名誉教授Patricia Churchland将论文中的想法称为“非常谨慎和有深度的解决方案”。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然而,Churchland指出,该论文关于社会决策的讨论可能存在缺陷。DeepMind研究人员专注于社交互动中的个人收益。Churchland最近写了一本关于道德直觉的生物学起源的书,他认为依恋和联系是影响哺乳动物和鸟类社会决策的一个强大因素,这就是为什么动物会为了保护它们的孩子而将自己置于极大的危险之中。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Churchland说:“我倾向于将亲密关系以及其他人的关怀视为自我的延伸——‘由我及人’(me-and-mine)。”“在这种情况下,我认为对[论文]假设进行小幅修改以实现对'由我及人'的奖励最大化会非常有效。当然,我们群居动物都有依恋水平——对后代的依恋水平超强,对配偶亲人很强,对朋友和熟人很强等等。依恋类型的强弱会因环境和发育阶段而异。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Churchland说,这不是一条主要批评意见,并且很可能会非常优雅地融入这个假设。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“我对论文中的细节水平以及他们对可能弱点考虑的仔细程度印象深刻。”“我可能错了,但我倾向于认为这是一个里程碑。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据科学家Herbert Roitblat对该论文的立场——即简单的学习机制和试错经验足以培养与智能相关的能力——提出了挑战。Roitblat认为,论文中提出的理论在现实生活中实施时面临着一些挑战。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“如果没有时间限制,那么试错学习可能就足够了。但存在时间限制的话,我们就会遇到无限数量的猴子在无限长的时间内打字的问题,”Roitblat说。无限猴子定理指出,一只猴子在无限长的时间内随机敲打打字机上的按键,最终可能会打出任何给定的文本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Roitblat是《算法还不够》一书的作者,他在其中解释了为什么当前所有的AI算法,包括强化学习,都需要仔细规划人类创建的问题和表示。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“一旦建立了模型及其内在表示,优化或强化过程就可以引导模型进化,但这并不意味着单纯强化就足够了,”Roitblat说。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同样,Roitblat补充说,该论文没有就如何定义强化学习的奖励、动作和其他元素提出任何建议。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“强化学习假设智能体具有一组有限的潜在动作。奖励信号和价值函数也已经指定就绪。换句话说,通用智能的问题恰恰是如何提供被强化学习作为先决条件的那些东西,”Roitblat说。“因此,如果机器学习都可以简化为某种形式的优化策略,以最大化某些评估措施,那么强化学习肯定是有效的,但这种解释的说服力并不是很强。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"原文链接:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/bdtechtalks.com\/2021\/06\/07\/deepmind-artificial-intelligence-reward-maximization\/?fileGuid=qgprgqTXvgQxwTXJ","title":"","type":null},"content":[{"type":"text","text":"https:\/\/bdtechtalks.com\/2021\/06\/07\/deepmind-artificial-intelligence-reward-maximization\/"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章