AI models from Microsoft and Google already surpass human performance on the SuperGLUE language benchmark

Researchers affiliated with Facebook, New York University (NYU), the University of Washington, and DeepMind introduced [SuperGLUE](https://venturebeat.com/2019/08/14/ai-researchers-launch-superglue-a-rigorous-benchmark-for-language-understanding/amp/) in late 2019, a new AI benchmark for summarizing research progress across a range of language tasks. Building on the GLUE benchmark released the year before, SuperGLUE includes a set of harder language-understanding challenges, improved resources, and a [public leaderboard](https://super.gluebenchmark.com/leaderboard/).

When SuperGLUE was introduced, there was a gap of nearly 20 points between the best-performing model on the leaderboard and human performance. But as of early January, two models, DeBERTa from Microsoft and T5+Meena from Google, have surpassed the human baseline, becoming the first to do so.

Sam Bowman, assistant professor at NYU's center for data science, said the achievement reflects innovations in machine learning, including self-supervised learning, in which models learn from unlabeled datasets, along with methods for applying those insights to a target task.

"These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago. There's no reason to believe that SuperGLUE will be able to detect further progress in natural language processing, at least beyond a small remaining margin," Bowman said.

But SuperGLUE is not a perfect test of human language ability, nor a complete one. In a blog post, the Microsoft team behind DeBERTa noted that their model is "by no means" reaching the human-level intelligence of natural language understanding. They say this will require research breakthroughs, along with new benchmarks to measure them and their effects.

## SuperGLUE

As the researchers wrote in the paper introducing the benchmark, "[SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems](https://w4ngatang.github.io/static/papers/superglue.pdf)," SuperGLUE is designed to be a simple yet challenging measure of progress in general-purpose language-understanding technologies for English. It comprises eight language-understanding tasks drawn from existing data, accompanied by a performance metric and an analysis toolkit.

The tasks are:

- **Boolean Questions** (BoolQ): requires models to answer a question about a short passage from a Wikipedia article that contains the answer. The questions come from Google users, who submitted them through Google Search.
- **CommitmentBank** (CB): tasks models with identifying a hypothesis contained in a text excerpt, including sources from the Wall Street Journal, and determining whether the hypothesis holds.
- **Choice of Plausible Alternatives** (COPA): provides a premise sentence about topics from blogs and a photography-related encyclopedia, from which models must determine the cause or effect between two possible choices.
- **Multi-Sentence Reading Comprehension** (MultiRC): a question-answering task in which each example consists of a context paragraph, a question about that paragraph, and a list of possible answers. A model must predict which answers are true and which are false.
- **Reading Comprehension with Commonsense Reasoning Dataset** (ReCoRD): models predict masked-out words and phrases from a list of passages from CNN and the Daily Mail, in which the same word or phrase may be expressed in multiple different forms, all of which are considered correct.
- **Recognizing Textual Entailment** (RTE): challenges natural language models to determine whether the truth of one text excerpt follows from another text excerpt.
- **Word-in-Context** (WiC): provides models with two text snippets and a polysemous word (a word with multiple meanings) and requires them to determine whether the word is used with the same sense in both sentences.
- **Winograd Schema Challenge** (WSC): a task in which models, given passages from fiction books, must answer multiple-choice questions about the antecedents of ambiguous pronouns. It was designed to be an improvement on the Turing test.

SuperGLUE also attempts to measure gender bias in models with Winogender schemas, pairs of sentences that differ only in the gender of one pronoun in the sentence.

But the researchers note that this approach has limitations, because it offers only positive predictive value: a poor bias score is clear evidence that a model exhibits gender bias, while a good score does not mean the model is unbiased. Moreover, it does not cover all forms of gender or social bias, making it only a coarse measure of prejudice.

To establish a human-performance baseline, the researchers drew on existing literature for WiC, MultiRC, RTE, and ReCoRD and hired crowd annotators through Amazon's Mechanical Turk platform.

Crowd workers, paid an average of $23.75 per hour, first completed a short training phase before going on to annotate up to 30 samples from selected test sets, using the provided instructions and an FAQ.

## Architectural improvements

While the Microsoft researchers behind DeBERTa detailed their work in a blog post published on January 6, "[Microsoft DeBERTa surpasses human performance on the SuperGLUE benchmark](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/)," the Google team has yet to share details about the performance improvements to its model.

DeBERTa itself is not new; it was open-sourced last year. But the researchers say they have now trained a larger version with 1.5 billion parameters (the internal variables a model uses to make predictions). It will be released as open source and integrated into the next version of Microsoft's Turing natural language representation model, which supports products such as Bing, Office, Dynamics, and Azure Cognitive Services.

DeBERTa is pretrained with masked language modeling, a fill-in-the-blank task in which a model learns to predict what a masked word should be, using the words surrounding the masked token. DeBERTa uses both the content and the position information of context words for masked language modeling; it can recognize, for example, that "store" and "mall" play different syntactic roles in the sentence "a new store opened beside the new mall."

Unlike some other models, DeBERTa also accounts for words' absolute positions during language modeling. In addition, it computes parameters in the model that transform the input data and measure the strength of word-to-word dependencies based on words' relative positions. For example, DeBERTa understands that the dependency between "deep" and "learning" is much stronger when the two words occur next to each other than when they appear in different sentences.

DeBERTa also benefits from adversarial training, a technique that uses adversarial examples derived from small perturbations of the training data. These adversarial examples are fed to the model during training, improving its ability to generalize.

The Microsoft researchers next hope to explore how to enable DeBERTa to generalize to novel subtasks or basic problem-solving skills, a concept known as compositional generalization. One path forward may be incorporating so-called compositional structures more explicitly, which could involve combining AI with symbolic reasoning; in other words, manipulating symbols and expressions according to mathematical and logical rules.

"DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI," the Microsoft researchers wrote. "But unlike DeBERTa, humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task, with no or little task-specific demonstration."

## New benchmarks

According to Bowman, a successor to SuperGLUE has yet to emerge, at least in the near term. But there is growing consensus within the AI research community that future benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they are to be useful.

For example, a number of studies show that popular benchmarks do a poor job of estimating real-world AI performance. One recent [report](https://venturebeat.com/2020/08/12/natural-language-benchmarks-dont-measure-ai-models-general-knowledge-well-research-shows/) found that 60% to 70% of the answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. Another study, a meta-analysis of more than 3,000 AI papers, found that the metrics used to benchmark AI and machine learning models tend to be inconsistent, irregularly tracked, and not particularly informative.

Part of the reason is that language models such as OpenAI's [GPT-3](https://venturebeat.com/2020/07/24/ai-weekly-the-promise-and-shortcomings-of-openais-gpt-3/), Google's T5+Meena, and Microsoft's DeBERTa learn to write humanlike text by internalizing examples from the public web. Drawing on sources such as ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs.

As a result, language models often amplify the biases encoded in this public data; [it is not uncommon](https://venturebeat.com/2020/08/07/researchers-quantify-bias-in-reddit-content-sometimes-used-to-train-ai/) for portions of the training data to come from communities with pervasive gender, race, and religious prejudices. OpenAI, an AI research company, notes that this can lead to placing words like "naughty" or "sucked" near female pronouns, and "Islam" near words like "terrorism."

In April, researchers at Intel, MIT, and the Canadian AI initiative CIFAR published a study reporting strong stereotyping in some of the most popular models, including Google's [BERT](https://venturebeat.com/2018/11/02/google-open-sources-bert-a-state-of-the-art-training-technique-for-natural-language-processing/) and [XLNet](https://venturebeat.com/2019/06/21/google-brains-xlnet-bests-bert-at-20-nlp-tasks/), OpenAI's [GPT-2](https://venturebeat.com/2019/08/20/openai-releases-curtailed-version-of-gpt-2-language-model/), and Facebook's [RoBERTa](https://venturebeat.com/2019/07/29/facebook-ais-roberta-improves-googles-bert-pretraining-methods/).

According to the Middlebury Institute of International Studies, malicious actors could leverage such bias to foment discord by spreading misinformation, disinformation, and outright lies that "radicalize individuals into violent far-right extremist ideologies and behaviors."

Most existing language benchmarks fail to capture this. Perhaps, motivated by the findings made in the two years since SuperGLUE's introduction, future benchmarks will.

**About the author:**

Kyle Wiggers is a technology journalist based in New York City who writes about artificial intelligence for VentureBeat.

**Original link:**

https://venturebeat.com/2021/01/06/ai-models-from-microsoft-and-google-already-surpass-human-performance-on-the-superglue-language-benchmark/
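The disentangled attention that the article credits to DeBERTa scores each pair of tokens with separate terms for content and for relative position (content-to-content, content-to-position, and position-to-content). The numpy sketch below illustrates that idea only; the toy sizes, the `rel_index` clipping helper, and the omission of the learned query/key projection matrices are all simplifications of mine, not the actual DeBERTa implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d = 6, 8  # toy sequence length and hidden size (hypothetical)
k = 4              # maximum relative distance kept before clipping

# Stand-ins for learned parameters: content embeddings H and one
# relative-position embedding per clipped relative distance.
H = rng.normal(size=(seq_len, d))
P = rng.normal(size=(2 * k, d))

def rel_index(i, j, k):
    """Clip the relative distance j - i into the range [0, 2k)."""
    return int(np.clip(j - i + k, 0, 2 * k - 1))

# Score every token pair with three disentangled terms.
scores = np.zeros((seq_len, seq_len))
for i in range(seq_len):
    for j in range(seq_len):
        c2c = H[i] @ H[j]                   # content-to-content
        c2p = H[i] @ P[rel_index(i, j, k)]  # content-to-position
        p2c = P[rel_index(j, i, k)] @ H[j]  # position-to-content
        scores[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * d)

# Row-wise softmax turns the scores into attention weights.
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
```

Because the position terms depend only on `j - i`, the same pair of words scores differently depending on how far apart they appear, which is the mechanism behind the "deep learning" example above.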
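The adversarial-training recipe the article mentions (perturb the inputs slightly, then train on the perturbed copies as well) can be shown on a deliberately tiny stand-in model. In the sketch below, logistic regression over fixed "embeddings" plays the role of the language model; each step builds an adversarial copy of the embeddings by moving them a small distance along the loss gradient. The data, the `eps` and `lr` values, and the model itself are hypothetical choices for brevity, not DeBERTa's actual procedure, which perturbs normalized word embeddings inside a Transformer.

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(32, 16))               # toy "embeddings", 32 samples
y = rng.integers(0, 2, size=32).astype(float)
w = np.zeros(16)                            # classifier weights

def grads(X, y, w):
    """Logistic-loss gradients w.r.t. the weights and the embeddings."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    gw = X.T @ (p - y) / len(y)
    gX = np.outer(p - y, w) / len(y)
    return gw, gX

eps, lr = 0.1, 0.5
for _ in range(200):
    gw, gX = grads(X, y, w)
    # Adversarial copy: nudge each embedding in the direction that
    # most increases the loss (normalized, FGM-style perturbation).
    norm = np.linalg.norm(gX, axis=1, keepdims=True) + 1e-12
    X_adv = X + eps * gX / norm
    gw_adv, _ = grads(X_adv, y, w)
    # Update on clean and adversarial gradients together.
    w -= lr * (gw + gw_adv)
```

Training against both the clean and the perturbed inputs pushes the model toward decision boundaries that are stable under small input changes, which is the generalization benefit the article describes.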