谷歌发布生态系统RLDS,可在强化学习中生成、共享和使用数据集

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"大多数"},{"type":"link","attrs":{"href":"https:\/\/en.wikipedia.org\/wiki\/Reinforcement_learning","title":null,"type":null},"content":[{"type":"text","text":"强化学习"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和"},{"type":"link","attrs":{"href":"https:\/\/en.wikipedia.org\/wiki\/Sequential_decision_making","title":null,"type":null},"content":[{"type":"text","text":"序列决策算法"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"都需要智能体与环境的大量交互生成训练数据,以获得最佳性能。这种方法效率很低,尤其是在很难做到这种交互的情况下,比如用真实的机器人来收集数据,或者和人类专家进行交互。要缓解这个问题,可以重用外部的知识源,比如 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/deepmind-research\/tree\/master\/rl_unplugged#atari-dataset","title":null,"type":null},"content":[{"type":"text","text":"RL Unplugged Atari 数据集"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",其中包括玩 Atari 游戏的合成智能体的数据。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"但是,由于这些数据集非常少,而且序列决策生成数据的任务和方式多种多样(例如,专家数据或噪声演示,人类或合成交互,等等),因此,整个社区要用一组很少的、具有代表性的数据集进行工作,就不太现实,甚至不可取。另外,有些数据集被发行成仅适合特定算法的形式,因此研究者不能重用这些数据集。比如,某些数据集并没有包含与环境的交互序列,但却提供了一组让我们无法重构其时间关系的随机交互,其他数据集则会以稍有差异的方式发行,从而导致细微的误差,非常难以识别。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"基于此,我们提出了"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/2111.02767","title":null,"type":null},"content":[{"type":"text","text":"强化学习数据集"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(Reinforcement Learning Datasets,RLDS),并发布了一套用于记录、重放、操作、注释和共享数据的"},{"type":"link","attrs":{"href":"http:\/\/github.com\/google-research\/rlds","title":null,"type":null},"content":[{"type":"text","text":"工具"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",用于序列决策制定,其中包括"},{"type":"link","attrs":{"href":"https:\/\/ai.googleblog.com\/2020\/08\/tackling-open-challenges-in-offline.html","title":null,"type":null},"content":[{"type":"text","text":"离线强化学习"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"、"},{"type":"link","attrs":{"href":"https:\/\/en.wikipedia.org\/wiki\/Apprenticeship_learning","title":null,"type":null},"content":[{"type":"text","text":"学徒学习"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(Apprenticeship learning)或"},{"type":"link","attrs":{"href":"https:\/\/ai.googleblog.com\/2020\/09\/imitation-learning-in-low-data-regime.html","title":null,"type":null},"content":[{"type":"text","text":"模仿学习"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(imitation learning)。RLDS 可以方便地共享数据集,而不会损失任何信息(比如,保持交互的序列,而非随机化),而且独立于底层原始格式,从而允许用户在更广泛的任务上对新的算法进行快速测试。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"另外,RLDS 提供了收集由合成智能体("},{"type":"link","attrs":{"href":"http:\/\/github.com\/deepmind\/envlogger","title":null,"type":null},"content":[{"type":"text","text":"EnvLogger"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":")或人类("},{"type":"link","attrs":{"href":"http:\/\/github.com\/google-research\/rlds-creator","title":null,"type":null},"content":[{"type":"text","text":"RLDS Creator"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":")生成的数据的工具,以及对收集到的数据进行检查与处理的工具。最后,通过与 "},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/datasets","title":null,"type":null},"content":[{"type":"text","text":"TensorFlow Dataset"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(TFDS)集成,有助于加强与研究界共享强化学习数据集。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"通过 RLDS,用户可以将智能体与环境的交互以无损、标准的格式进行记录。他们可以利用并转换这些数据,供不同的强化学习或序列决策算法使用,或者进行数据分析。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"数据集结构"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"强化学习、离线强化学习或模仿学习中的算法,都有可能会使用格式完全不同的数据,并且,当数据集的格式不清楚时,很容易导致由于对底层数据的误解引起的 bug。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"RLDS 通过定义数据集的每个字段的内容和意义,使数据格式显式化,并为其提供了重新对齐和转换的工具,以适应任何算法实现所需的格式。为了定义数据格式,RLDS 利用了强化学习数据集固有的标准结构,也就是"},{"type":"link","attrs":{"href":"https:\/\/commons.wikimedia.org\/wiki\/File:Reinforcement_learning_diagram.svg#\/media\/File:Reinforcement_learning_diagram.svg","title":null,"type":null},"content":[{"type":"text","text":"智能体和环境之间的交互(步骤)"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"的序列(情节),其中,智能体可以是基于规则的\/自动化控制器、正式规划者、人类、动物,或上述的组合。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"这些步骤中的每一个都包含当前的观察、应用于当前观察的行动、作为应用行动的结果而获得的奖励以及与奖励一起获得的"},{"type":"link","attrs":{"href":"https:\/\/en.wikipedia.org\/wiki\/Q-learning#Discount_factor","title":null,"type":null},"content":[{"type":"text","text":"折扣"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"。步骤还包括额外的信息,以表明该步骤是该情节的第一个还是最后一个,或者该观察是否对应于一个终端状态。每个步骤和情节还可以包含自定义的元数据,可用于存储与环境相关或与模型相关的数据。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"生成数据"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"研究人员通过记录任何类型的智能体"},{"type":"link","attrs":{"href":"https:\/\/commons.wikimedia.org\/wiki\/File:Reinforcement_learning_diagram.svg#\/media\/File:Reinforcement_learning_diagram.svg","title":null,"type":null},"content":[{"type":"text","text":"与环境的交互"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"来产生数据集。为了保持其有用性,原始数据最好以无损格式存储,记录所有生成的信息,并保留数据项之间的时间关系(例如,步骤和事件的序列),而不会对将来如何利用数据集作出任何假定。为了这个目的,我们发行了 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/envlogger","title":null,"type":null},"content":[{"type":"text","text":"EnvLogger"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",这是一个软件库,以"},{"type":"link","attrs":{"href":"https:\/\/en.wikipedia.org\/wiki\/Open_format","title":null,"type":null},"content":[{"type":"text","text":"开放文档格式"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"记录智能体与环境的交互。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"EnvLogger 是一种环境包装器,可以将智能体与环境的交互记录下来,并将它们存储在一个较长的时间内。虽然 EnvLogger 无缝地集成在 RLDS 生态系统中,但是我们将其设计为可作为一个独立的库使用,以提高模块化程度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"与大多数机器学习环境一样,为强化学习收集人类数据是一个既费时又费力的过程。解决这个问题的常见方法是使用众包,它要求用户能够轻松地访问可能难以扩展到大量参与者的环境。在 RLDS 生态系统中,我们发行了一个基于 Web 的工具,名为 "},{"type":"link","attrs":{"href":"http:\/\/github.com\/google-research\/rlds-creator","title":null,"type":null},"content":[{"type":"text","text":"RLDS Creator"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",该工具可以通过浏览器为任何人类可控制的环境提供一个通用接口。用户可以与环境进行交互,例如,在网上玩 Atari 游戏,交互会被记录和存储,以便以后可以通过 RLDS 加载回来,用于分析或训练智能体。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"共享数据"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"数据集的制作往往很烦琐,与更广泛的研究社区共享,不仅可以重现之前的实验,还可以加快研究速度,因为它更容易在一系列场景中运行和验证新算法。为此,RLDS 与 "},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/datasets","title":null,"type":null},"content":[{"type":"text","text":"TensorFlow Datasets"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(TFDS)集成,后者是一个现有的机器学习社区内共享数据集的库。一旦数据集成为 TFDS 的一部分,它就会被索引到全球 TFDS 目录中,这样,所有研究人员都可以通过使用 tfds.load(name_of_dataset) 来访问,并且可以将数据以 TensorFlow 或 "},{"type":"link","attrs":{"href":"https:\/\/numpy.org\/","title":null,"type":null},"content":[{"type":"text","text":"Numpy"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 格式加载。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"TFDS 独立于原始数据集的底层格式,所以,任何具有 RLDS 兼容格式的现有数据集都可以用于 RLDS,即使它最初不是用 EnvLogger 或 RLDS Creator 生成的。另外,使用 TFDS,用户对自己的数据拥有所有权和完全控制权,并且所有的数据集都包含了一个引用给数据集作者。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"使用数据"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"研究人员可以使用这些数据集对各种机器学习算法进行分析、可视化或训练,就像上面提到的那样,这些算法可能会以不同的格式使用数据,而不是以不同的格式存储数据。例如,一些算法,如 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/acme\/tree\/master\/acme\/agents\/tf\/r2d2","title":null,"type":null},"content":[{"type":"text","text":"R2D2"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 或 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/acme\/tree\/master\/acme\/agents\/tf\/r2d3","title":null,"type":null},"content":[{"type":"text","text":"R2D3"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",使用完整的情节;而另一些算法,如 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/acme\/tree\/master\/acme\/agents\/jax\/bc","title":null,"type":null},"content":[{"type":"text","text":"Behavioral Cloning"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(行为克隆)或 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/acme\/tree\/master\/acme\/agents\/jax\/value_dice","title":null,"type":null},"content":[{"type":"text","text":"ValueDice"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",则使用成批的随机步骤。为了实现这一点,RLDS 提供了一个强化学习场景的转换库。由于强化学习数据集的嵌套结构,所以这些转换都经过了优化,包括了自动批处理,从而加速了其中一些操作。使用这些优化的转换,RLDS 用户有充分的灵活性,可以轻松实现一些高级功能,而且开发的管道可以在 RLDS 数据集上重复使用。转换的示例包含了对选定的步骤字段(或子字段)的全数据集的统计,或关于情节边界的灵活批处理。你可以在这个"},{"type":"link","attrs":{"href":"https:\/\/github.com\/google-research\/rlds\/blob\/main\/rlds\/examples\/rlds_tutorial.ipynb","title":null,"type":null},"content":[{"type":"text","text":"教程"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"中探索现有的转换,并在这个 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/google-research\/rlds\/blob\/main\/rlds\/examples\/rlds_examples.ipynb","title":null,"type":null},"content":[{"type":"text","text":"Colab"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 中看到更复杂的真实示例。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"可用数据集"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"目前,"},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/datasets\/catalog\/overview","title":null,"type":null},"content":[{"type":"text","text":"TFDS"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 中有以下数据集(与 RLDS 兼容):"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"带有 "},{"type":"link","attrs":{"href":"https:\/\/gym.openai.com\/envs\/#mujoco","title":null,"type":null},"content":[{"type":"text","text":"Mujoco"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 和 "},{"type":"link","attrs":{"href":"https:\/\/sites.google.com\/corp\/view\/deeprl-dexterous-manipulation","title":null,"type":null},"content":[{"type":"text","text":"Adroit"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 任务的 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/rail-berkeley\/d4rl","title":null,"type":null},"content":[{"type":"text","text":"D4RL"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 的子集;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/deepmind-research\/tree\/master\/rl_unplugged","title":null,"type":null},"content":[{"type":"text","text":"RLUnplugged DMLab"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"、"},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/deepmind-research\/tree\/master\/rl_unplugged#atari-dataset","title":null,"type":null},"content":[{"type":"text","text":"Atari"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 和 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/deepmind\/deepmind-research\/tree\/master\/rl_unplugged#realworld-rl-dataset","title":null,"type":null},"content":[{"type":"text","text":"Real World RL"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 数据集;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"用 RLDS 工具生成的三个 "},{"type":"link","attrs":{"href":"https:\/\/robosuite.ai\/","title":null,"type":null},"content":[{"type":"text","text":"Robosuite"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 数据集。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我们的团队致力于在不久的将来迅速扩大这个清单,并且欢迎外界为 RLDS 和 TFDS 贡献新的数据集。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"结语"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"RLDS 生态系统不仅可以提高强化学习与序列决策问题研究的可重现性,还可以方便地进行数据的共享和重用。我们期望 RLDS 所提供的特性能够推动一种趋势,即发行结构化的强化学习数据集,包含所有的信息,并涵盖更广泛的智能体和任务。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"作者介绍:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Sabela Ramos,Google AI 软件工程师。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Léonard Hussenot,谷歌研究室,大脑团队学生研究员。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"原文链接:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"https:\/\/ai.googleblog.com\/2021\/12\/rlds-ecosystem-to-generate-share-and.html"}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章