爱奇艺多语言台词机器翻译技术实践

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"7月3日下午,爱奇艺技术产品团队举办了","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"“i技术会”第16期","attrs":{}},{"type":"text","text":"技术沙龙,本次技术会的主题是“","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"NLP与搜索","attrs":{}},{"type":"text","text":"”。我们邀请到了来自","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"字节跳动、去哪儿和腾讯","attrs":{}},{"type":"text","text":"的技术专家,与爱奇艺技术产品团队共同分享与探讨NLP与搜索结合的魔力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其中,来自爱奇艺的技术专家","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"张轩玮","attrs":{}},{"type":"text","text":"为大家带来了","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"爱奇艺多语言台词机器翻译技术实践","attrs":{}},{"type":"text","text":"的分享。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"福利!关注公众号“爱奇艺技术产品团队”,在后台回复关键词“NLP”,就可以获得本次i技术会嘉宾分享完整PPT和录播视频。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以下为“爱奇艺多语言台词机器翻译技术实践”分享精华内容,根据【i技术会】现场演讲整理而成。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本次分享的第一部分是爱奇艺多语言台词机器翻译实践开展的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"相关背景","attrs":{}},{"type":"text","text":",第二部分是爱奇艺针对多语言台词机器翻译","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"模型的一些探索和优化","attrs":{}},{"type":"text","text":",最后是该模型在爱奇艺的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"落地与应用情况","attrs":{}},{"type":"text","text":"。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"01 爱奇艺多语言台词机器翻译实践的相关背景","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2019年6月,爱奇艺正式推出服务全球用户的产品","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"iQIYI App","attrs":{}},{"type":"text","text":",并通过中台系统为iQIYI App提供全球化运营支持,由此开启了","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"海外市场布局之路","attrs":{}},{"type":"text","text":"。作为影视内容服务商,其中必然涉及大量长视频,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"而长视频的出海,重要的一环就是台词翻译。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前,爱奇艺已在","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"多个国家布局","attrs":{}},{"type":"text","text":",涉及多种语言的台词翻译,主要有泰语、越南语、印尼语、马来语、西班牙语、阿拉伯语等等语言,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"这就使得多语言翻译成为了迫在眉睫的现实需求。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,与通用翻译相比,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"台词翻译有一些独有的特点如:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(1)台词一般句子较短,上下文信息不足,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"歧义性大;","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(2)很多台词来","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"源于OCR或ASR识别的结果","attrs":{}},{"type":"text","text":",会有错误,可能影响翻译质量;","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(3)在台词对话中往往会涉及很多人物的指代,故而","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"角色名和代词的翻译","attrs":{}},{"type":"text","text":"对于台词翻译来说尤为重要;","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(4)部分台词需要","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"结合视频场景信息","attrs":{}},{"type":"text","text":"才能进行语义消歧。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"正是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"爱奇艺海外多国布局的现实需要以及台词翻译的独有特点","attrs":{}},{"type":"text","text":"这两大因素使得台词场景下多语言机器翻译实践成为现实。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"02 多语言台词机器翻译模型的探索和优化","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"1.one-to-many翻译模型优化","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先介绍一下什么是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"one-to-many模型","attrs":{}},{"type":"text","text":"。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"One-to-many顾名思义,即通过不同语言翻译之间的参数共享,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"通过一种模型来达到翻译多种目标语言的目的。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个模型设计的初衷是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"节约维护和训练成本","attrs":{}},{"type":"text","text":"。前面已经讲到,目前,爱奇艺已经布局到海外多个国家,这就涉及到多种语言的翻译,如果采用一种语言一个模型的方法,随着目标语言的增多,我们需要训练、部署和维护的模型也会越来越多,导致运营成本的增加。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"经调研,我们发现了one-to-many模型,它极大地减轻了模型的训练和部署维护的成本,可以充分","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"利用不同语言之间迁移学习的特点,起到相互促进的作用,从而提高模型效果。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"图1是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"transformer架构","attrs":{}},{"type":"text","text":",是目前大多数机器翻译模型优化的主流框架,我们也是以此为基础,在上面进行优化。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/0b/0b31b09cdd3551658d836b43ca486a75.jpeg","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图1:transformer模型","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而针对one-to-many模型,我们借鉴近期大家较为熟悉的预训练模型","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"BERT","attrs":{}},{"type":"text","text":",设计了一个特定的输入形式。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/be/be39c8b1b6ad048b88cb7a442634b8bd.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图2","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"每个输入的token的表达都是由三种embedding组成,分别是:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"token embeddings、segment embeddings、 position embeddings。","attrs":{}},{"type":"text","text":"我们把语言类型token作为单独的一个域,那它具有不同于内容的segment embeddings。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"segment embeddings由","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"两部分组成","attrs":{}},{"type":"text","text":",一个叫EA,一个叫EB。","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"EA相当于前面语言的token的segment,后面的EB就是内容的embeddings,不同语言的L是不同的。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外语言token表达也会作为decoder的第一个输入作为指导模型的解码。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"2.融合台词上下文信息","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"刚才提到,台词翻译第一个显著特点就是文本较短,上下文信息不足,容易产生歧义。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里举个例子,比如“我想静静”就可能有两种意思,一是let me alone,二是 I miss Jingjing。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"仅凭文本,我们很难区分究竟是哪一个意思。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"但我们如果能够结合台词的上句和下句,就可以减少这种歧义性。","attrs":{}},{"type":"text","text":"比如,上下句分别是“你走吧”、“再见”,我们就可以知道他想说的是let me alone。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,我们设计了用","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"BERT style","attrs":{}},{"type":"text","text":"的方式融合台词上下文的模型,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"输入时将上文和下文分别与中心句进行拼接,以特定的分隔符做分隔,","attrs":{}},{"type":"text","text":"而在encoder输出,我们还会","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"对上句和下句进行mask","attrs":{}},{"type":"text","text":",因为在解码这个时候,由于上下句在编码时已经被中心句吸收了相关信息,上句和下句已经不起太多作用。并且,如果不进行mask,有可能还会引来一些翻译错位的问题。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那么我们是如何融合上下文的呢?","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/07/0785090da5ee3be60a0af66b32c95422.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图3","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"也是在输入端,我们可以看到图3和图2的不同就在于我们除了把语言token和中心句用三种embedding向量融合之外,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"还会将上句“你走吧”和下句“再见”放在中心句的前后,","attrs":{}},{"type":"text","text":"然后以同样的方式,每个token也是三种embedding的相加融合,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"把上下文作为辅助信息,帮助中心句进行消歧。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们把语言、上句、中心句、下句分别标记为","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"EA、EB、EC、ED","attrs":{}},{"type":"text","text":"四种,对这四种信息进行区分,每一种标记都对应一种segment embedding。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个输入经过encoder之后,我们会对“你走吧”和“再见”进行mask,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"也就是在解码的时候隐藏上句和下句,减少它对解码的影响。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"3.增强编码能力","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除此之外我们还对编码端做了一些提高。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Transformer里面一个比较主要的组件就是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"attention","attrs":{}},{"type":"text","text":",","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"其中base版本包含8个head","attrs":{}},{"type":"text","text":"。我们为了强化attention,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"鼓励不同的head学习不同的特征,从而丰富模型的表征能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下图是这4种attention的示意图。","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"我们通过不同的mask策略实现不同的attention,图中黑色的方块代表mask掉的部分。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/6e/6e4cd32667fb2a3176a217aed20d45b2.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图4","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"global attention:","attrs":{}},{"type":"text","text":"建模任意词之间的依赖关系;","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"local attention:","attrs":{}},{"type":"text","text":"强制模型发掘局部的信息特征;","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"forward and backward attention:","attrs":{}},{"type":"text","text":"代表建模模型序列顺序信息。forward只能看到前面,backward只能看到后面。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"通过人为设定特点的attention,我们强制不同的head学习不同的特征,避免产生冗余的情况。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除此之外我们还借鉴bert,使用","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Masked LM","attrs":{}},{"type":"text","text":"任务增强模型对文本的理解能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先将输入的某一个词进行mask,然后在输出端进行恢复。比如“你走吧”,“我想静静”,“再见”,其中,“走”,“见”,都会被mask,输出的时候再被恢复。这就使得encoder在这种任务中充分地学习到文本的表达,增进它对文本的理解。同时","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"将mlm loss乘以一定的权重加到总体的loss上,进行联合训练。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/38/386c6ce602631c339421a56d38eacc5d.jpeg","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图5:MLM模型","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"4.增强解码能力","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了增强模型解码端的能力,在训练阶段,我们要求解码端在预测每个token的同时,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"增加对全局信息的预测,同时增强模型解码端的全局前瞻能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/69/69082f571a99cc3b0ea1ba658e720e73.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图6","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"比如这里的G代表let me alone的embedding的平均向量,每个token都会预测这个向量,从而产生GLOBAL loss。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个好处就是你在解码每个token的时候,我们可以让模型也预计我们将要解码的信息,而不会","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"过分依赖于我们前面已经解码的信息","attrs":{}},{"type":"text","text":",这样就使得模型具有一定的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"未来规划的能力","attrs":{}},{"type":"text","text":"。同样这也会产生一个loss,这个loss会和总体loss进行加权求和,用了一个β,也是小于1的权重,和整体的模型进行联合训练。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"5.欠翻译和过翻译问题的解决","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"欠翻译和过翻译是模型在做翻译时可能会经常遇到的一些问题。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"欠翻译是指翻译的目标语言词语缺失,过翻译指的是目标语言词语冗余。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"比如上文提到的“你走吧、我想静静、再见”这个案例,就有可能在模型训练不到位的时候产生let alone,缺少了me,这就是所谓的欠翻译。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外,也有可能翻译成Let me me alone, 重复翻译me, 这就是所谓的过翻译。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这些都是我们不希望在翻译结果中出现的,而产生这两大问题的一个","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"本质原因就在于解码生成的信息和编码的信息不够对等。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/84/8466c076fa8451b58ed210894d8b26fc.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图7","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以我们增加了一个重建的模块,对其进行约束。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"重建模块对解码端的输出通过","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"一个反向翻译的decoder翻译成source","attrs":{}},{"type":"text","text":",也就是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"恢复输入","attrs":{}},{"type":"text","text":",从而使得解码端的信息和编码端的信息保持一致,约束解码端,从而减轻欠翻译和过翻译问题,同样它也会产生一个loss,和之前一样也是加到总体loss,进行联合训练。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"6.增强容错能力","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除了以上的探索优化之外我们刚才也提到了一点,就是我们的台词字幕有很大一部分是来源于OCR或者ASR识别的结果, 难免会出现一些词识别错误的问题,如果我们不进行特定处理有可能影响最后的翻译质量。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以我们针对这个台词识别错误的问题,设计了一个容错模块。这个容错模块可以认为是纠错模块。我们借鉴了去年发表的一篇论文所提出的一个模型——","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"T-TA模型","attrs":{}},{"type":"text","text":"。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个模块类似大家应该很熟悉的transformer结构,但是我们在里面做了一些特定的处理:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先是它使用了一种叫做","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"language autoencoding","attrs":{}},{"type":"text","text":"的方式,输出的每个token","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"只能看到其他的token,但看不到它自己。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"也就是说,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"它输出的表达是由它周边的token的意思来产生的。","attrs":{}},{"type":"text","text":"比如X1是错的,但X2,X3,X4是对的,在你经过大量的数据训练之后,可以通过X2、X3、X4去生成正确的X1,从而达到一种纠错的能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"那要怎么才能让每个token看不到它自己而只能看到周边呢?","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/8b/8beb967d0d56868235764808b77610b2.jpeg","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图8","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其实也很简单,就是使用了一个","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"对角线mask的方式。","attrs":{}},{"type":"text","text":"这样,每次它就只能看到他其他的token,中间的黑色对角是看不见的,也就看不到它自己。通过这个方式,对深黄色那部分进行这样的特定处理,从而实现一种纠错能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们可以注意到","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"它的Q也只是用到position embedding","attrs":{}},{"type":"text","text":",因为如果QKV和Self attention是一样的,再残差连接的话就会把token embedding给加到输出,相当于把你刚刚挖掉的部分又填补上了,会产生信息泄露的问题,这样就训练不出一种纠错的模块。所以Q只是position embedding。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个模块大致就是这样,但是它们是怎么融合到机器翻译模型中呢?","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/9e/9e07ca04a17151c168e9412c9fd2199b.jpeg","alt":null,"title":"","style":[{"key":"width","value":"50%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图9","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其实,只要直接和我们之前介绍的那些encoder进行","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"相加融合","attrs":{}},{"type":"text","text":",两个encoder输入都是一样的,输出进行相加融合,融合之后再进入后面的两个decoder的处理,这样就可以对原始encoder的错误进行纠正。比如这里的“静”错输成了“净”,但T-TA的encoder却能输出正确结果,起到了纠错的作用。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"7.代词翻译","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"刚才我们也提到,在台词翻译领域另一个重要问题就是","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"代词的翻译。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因为在对话中我们会涉及到很多人物之间的指代,比如提到你、我、他等等,在不同的场景下,对应的翻译是不同的,这就大大提高了台词翻译的难度。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"遇到这种情况我们该怎么办呢?","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"针对这个问题,我们首先可以看一下它的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"表达数量","attrs":{}},{"type":"text","text":"以及","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"表达场景","attrs":{}},{"type":"text","text":"。因为代词在中文里面可能很简单,就是你、我、他,可能也就最多3、4种或者4、5种,但在其他语言中未必是这样。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"比如","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"泰语的代词第一人称就有12种表达,第二人称代词有15种表达,第三人称有5种表达。","attrs":{}},{"type":"text","text":"对于第一人称,这12个表达还会随着性别和使用场合的不同而发生变化。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"对话人身份之间的差异,也会使得这种代词表达有所区别。","attrs":{}},{"type":"text","text":"这对于台词机器翻译来说,是一个巨大的挑战。所有这些不同场合都需要我们将其区分出来,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"而这项工作很难仅仅只通过文本来完成。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/e7/e7f6888d93aa67c957ccfdd3c27da051.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图10:中文-泰语人称代词对应表","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因此,我们做了一个","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"融合视频场景信息的代词的语义增强。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先我们通过","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"人脸识别和声纹识别","attrs":{}},{"type":"text","text":"对齐台词和角色,通过这种对齐可以使得","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"每一句台词定位到它所处的场景。","attrs":{}},{"type":"text","text":"再将","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"角色人物属性","attrs":{}},{"type":"text","text":"比如性别,年龄,人物关系,身份等","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"标注好,使角色的信息更丰富、更加立体。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fe/fed09edee39662f40b399aa42cce2867.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图11","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"左边的模型里面有两个代词,就是“你”和“我”,右边的模块是对“我”和“你”的一些信息的编码。比如“我”就属于男性,年龄是青年,“我”和对话人之间的关系是朋友等等。这样分别对“我”和“你”进行编码,编码后用这些信息做一个","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"变换和降维","attrs":{}},{"type":"text","text":",分别加到对应的代词上,使得解码的时候,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"知道这个代词所处的场景及人物关系,从而使它能够解码出正确的代词翻译。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"8.成语翻译","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除代词外,成语的翻译在台词机器翻译中也是比较困难的一个部分。这是因为:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(1)随着多年演变,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"很多成语都不再只是它字面的意思,而是包含了很多引申义。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这时候如果我们不做特定处理的话,极有可能仅将字面意思翻译出来,影响翻译准确度。所以,我们需要其他的辅助信息,比如释义等。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(2)","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"有些成语具有语义独立的特点","attrs":{}},{"type":"text","text":",也就是说某个成语的含义和上下文没有那么大的关联。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"针对这两个特点,我们设计了针对成语翻译的模块,使用预训练的BERT,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"对中文以及中文释义进行编码,直接替换encoder的成语输入和添加到encoder的输出","attrs":{}},{"type":"text","text":",来确保成语真正含义的表达能够在模型中学习得到。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/c8/c81c1e41c946477fa47ea6b20008dac1.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图12","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":14}},{"type":"strong","attrs":{}}],"text":"9.角色名翻译","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"针对这一部分,我们是通过","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"增加特殊标识以及数据增强的方式","attrs":{}},{"type":"text","text":",使得模型学习到特定的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"拷贝能力","attrs":{}},{"type":"text","text":"。大部分的台词从中文翻译到对应的语言的时候,角色名都是以拼音作为翻译的。当然在一些不适宜拼音的语言中,也会有一些其他的对应关系,在这里我们暂且以拼音为例。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们首先将","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"人名替换成拼音","attrs":{}},{"type":"text","text":",因为这时候它的真正的文本已经不重要了,最重要的是它将要翻译的目标语言。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"比如在图13这个例子中,“你认识李飞吗?”,我们首先将李飞中文替换成拼音li fei,对其增加一个特殊的标识,这也就是想告诉模型:这部分是要拷贝过去的。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/49/4998bdf4445a036b34369950214a28f2.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图13","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外,为了增加模型见过的拼音输入表达的数量,我们","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"通过训练集挖掘了人名和姓氏的模板将其与伪名字合并成增强的数据,将增强数据和原来的数据串在一起进行训练","attrs":{}},{"type":"text","text":",使得模型能学到足够的拷贝能力。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这种方式通过训练模型,使得机器能够识别这种标识以及里面的拼音,将其复制到对应的位置。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"03 多语言台词机器翻译在爱奇艺的落地应用","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们在多语言台词机器翻译模型上做了一些优化探索后,也对优化后模型的质检差错率做了一些评测,这里列举一部分。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/96/9606a1054fda611cd1193a3835037f0c.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图14:各语言质检差错率","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"图中的每种语言都有第三方机器、人工、自研机器三种翻译,其中,自研的机器翻译就是我们自己经过模型探索、优化后的效果。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"从图14可以看出,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"我们自研的翻译差错率已经明显低于第三方","attrs":{}},{"type":"text","text":",这个第三方指的目前市场上最好的第三方。在泰语、印尼语、英语等语言中,我们自研的机器翻译已经接近于人工,而在马来语、西班牙语、阿拉伯语的翻译中,自研翻译甚至已经超过人工。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"此外,我们做的翻译主要应用在","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"国际站长视频出海","attrs":{}},{"type":"text","text":"的项目中,目前已经支持从","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"简体中文到印尼语,马来语,泰语,越南语,阿拉伯语,繁体中文","attrs":{}},{"type":"text","text":"等多种语言的翻译。","attrs":{}}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章