Facebook打造第一视角视频数据集Ego4D:捕获超3000小时镜头,剑指下一代AI

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"据了解,Ego4D 是目前最大的第一视角日常活动视频数据集。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"第一视角视频数据集 Ego4D"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近日,Facebook 公布了一项名为 "},{"type":"link","attrs":{"href":"https:\/\/ego4d-data.org\/","title":"xxx","type":null},"content":[{"type":"text","text":"Ego4D"}]},{"type":"text","text":" 的研究项目。该项目为 Facebook 与全球 13 所大学和实验室合作项目,通过收集第一人称镜头,以训练下一代人工智能模型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"信息显示,Ego4D 数据集包含"},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/2110.07058","title":"xxx","type":null},"content":[{"type":"text","text":"超过 3025 个小时的视频"}]},{"type":"text","text":",由来自 9 个国家(美国、英国、印度、日本、意大利、新加坡、沙特阿拉伯、哥伦比亚和卢旺达)73 个不同地点录制的视频组成,总录制人数达 855 人。据了解,这些参与者拥有不同的年龄和背景,有些人是因其有趣的职业而被招募过来,例如面包师、机械师、木匠和园艺师。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/6d\/5a\/6d1c21f9eb29f19cfe25ff99dd4ec75a.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这也是目前最大的"},{"type":"link","attrs":{"href":"https:\/\/www.cnbc.com\/2021\/10\/14\/facebook-announces-ego4d-first-person-video-data-set-for-training-ai.html","title":"xxx","type":null},"content":[{"type":"text","text":"第一视角日常活动视频数据集"}]},{"type":"text","text":",在此之前,最大的第一视角视频数据集由人在厨房里 100 个小时的镜头组成。此外,以前的数据集通常由只有几秒钟的半脚本视频剪辑组成,而 Ego4D 的参与者一次佩戴头戴式摄像头长达 10 小时,并拍摄无脚本日常活动的第一人称视频,包括沿街散步、阅读、洗衣、购物、与宠物玩耍、玩棋盘游戏和与其他人互动。一些镜头还包括音频、有关参与者注视焦点位置的数据以及同一场景的多个视角。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/resource\/image\/eb\/77\/ebe4abca3f7b1688e6d7998d423a9e77.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"收集到视频后,卢旺达的工作人员总共花费了 25 万个小时观看数千个视频剪辑,并编写数百万个描述拍摄场景和活动的句子。这些视频能够帮助人工智能理解或识别现实世界或虚拟世界中的某些事物,人类也可以通过一副眼镜或 Oculus 耳机从第一人称视角看到这些事物。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"纽约石溪大学和谷歌大脑的计算机视觉研究员 Michael Ryoo 表示:“这个数据集里的视频更接近人类所观察的世界,这在同类数据集中是第一个。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"值得一提的是,研究人员还列出了该项目的五大挑战:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"情景记忆:我的 X 在哪里?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"手与物体交互:物体在交互过程中如何变化?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"视听日记:谁说了什么,什么时候说的?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"社会交互:谁在与谁交互?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"预测:接下来会发生什么?"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Facebook 首席研究科学家 Kristen Grauman 在接受 CNBC 采访时表示,“这次发布的是一个开放数据集和研究挑战,它能促进我们内部和学术界外部进步,其他研究人员可以支持这些新问题,以更有意义、更大规模的方式共同解决它”。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"据 Grauman 介绍,该数据集可以部署在 AI 模型中,用于训练机器人等技术以更快地了解世界。“在过去,机器人通过在自己做事来进行学习,现在,它们有机会根据人类经验从视频中学习。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Facebook 表示,Ego4D 数据集将在 2021 年 11 月底之前提供下载。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"隐私问题引担忧"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Ego4D 数据集虽然给下一代人工智能带来了更多的想象空间,但也不可避免地引发人们对于"},{"type":"link","attrs":{"href":"https:\/\/www.technologyreview.com\/2021\/10\/14\/1037043\/facebook-machine-learning-ai-vision-see-world-human-eyes\/","title":"xxx","type":null},"content":[{"type":"text","text":"隐私问题"}]},{"type":"text","text":"的担忧。Grauman 坦言:“在做 Ego4D 项目时,我们也意识到有一些隐私方面的工作需要做,尤其是当将隐私从探索性研究领域带出融入到产品中时。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Facebook 表示,只有在征得参与者同意后,数据才会包含面部和其他识别信息。同时,出于隐私考虑,对于大多数视频,数据已在发布前进行了去标识化处理,如已从视频中删除了个人身份信息,并模糊了旁观者的面部和车牌号码,此外,许多视频中的音频也被删除了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"面对质疑,Facebook 的发言人称,该公司预计将来会进一步引入隐私保护措施,“Ego4D 纯粹是为了促进更广泛科学界进步的研究,我们今天没有任何关于产品应用或商业用途的分享。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"伴随着人工智能技术快速发展,隐私问题一直是大家讨论的焦点。人工智能在获取和处理海量信息数据,不可避免会涉及个人隐私保护这一重要伦理问题,并且隐藏着不容忽视的隐私泄露风险。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"当前,国内外都颁布了相应的法规保护用户隐私与数据安全。比如在欧洲,2018 年生效的《通用数据保护条例》(General Data Protection Regulation,GDPR)对个人数据的收集和使用进行了规范。数据保护条例并没有明确提及人工智能或机器学习,但对个人数据的大规模自动处理和自动决策非常重视。这意味着,凡是人工智能使用个人数据的地方,都属于该条例的范围,皆适用 GDPR 原则。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而至于 Facebook 的 Ego4D 数据集未来会在隐私保护上交出怎样的答卷,一切交给时间。"}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章