How to Make BERT Development Easier with TensorFlow Hub?

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在自然语言处理领域,"},{"type":"link","attrs":{"href":"https:\/\/ai.googleblog.com\/2018\/11\/open-sourcing-bert-state-of-art-pre.html","title":null,"type":null},"content":[{"type":"text","text":"BERT"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 和其他 "},{"type":"link","attrs":{"href":"https:\/\/ai.googleblog.com\/2017\/08\/transformer-novel-neural-network.html","title":null,"type":null},"content":[{"type":"text","text":"Transformer"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 编码器架构都非常成功,无论是推进学术基准的技术水平,还是在 "},{"type":"link","attrs":{"href":"https:\/\/blog.google\/products\/search\/search-language-understanding-bert\/","title":null,"type":null},"content":[{"type":"text","text":"Google Search"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 这样的大规模应用中,均是如此。BERT 自 TensorFlow 创建以来一直可用,但它最初依赖于非 TensorFlow 的 Python 代码,以将原始文本转换为模型输入。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"如今,在 TensorFlow 中构建 BERT 会更加简单。开发者可在 "},{"type":"link","attrs":{"href":"https:\/\/tfhub.dev\/google\/collections\/bert\/1","title":null,"type":null},"content":[{"type":"text","text":"TensorFlow Hub"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 上使用"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"预训练编码器"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和匹配的"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"文本预处理"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"模型。在 TensorFlow 中运行 BERT 对文本输入的操作只需要几行代码:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"# Load BERT and the preprocessing model from TF Hub.\npreprocess = hub.load('https:\/\/tfhub.dev\/tensorflow\/bert_en_uncased_preprocess\/1')\nencoder = hub.load('https:\/\/tfhub.dev\/tensorflow\/bert_en_uncased_L-12_H-768_A-12\/3')\n\n\n# Use BERT on a batch of raw text inputs.\ninput = preprocess(['Batch of inputs', 'TF Hub makes BERT easy!', 'More text.'])\npooled_output = encoder(input)[\"pooled_output\"]\nprint(pooled_output)\n\n\ntf.Tensor(\n[[-0.8384154 -0.26902363 -0.3839138 ... -0.3949695 -0.58442086 0.8058556 ]\n [-0.8223734 -0.2883956 -0.09359277 ... -0.13833837 -0.6251748 0.88950026]\n [-0.9045408 -0.37877116 -0.7714909 ... 
-0.5112085 -0.70791864 0.92950743]],\nshape=(3, 768), dtype=float32"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"这些编码器和预处理模型已经用 "},{"type":"link","attrs":{"href":"https:\/\/github.com\/tensorflow\/models\/tree\/master\/official\/nlp","title":null,"type":null},"content":[{"type":"text","text":"TensorFlow Model Garden"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 的 NLP 库构建,并以 "},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/hub\/tf2_saved_model","title":null,"type":null},"content":[{"type":"text","text":"SavedModel 格式"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"导出到 TensorFlow Hub。实际上,预处理使用 "},{"type":"link","attrs":{"href":"https:\/\/blog.tensorflow.org\/2019\/06\/introducing-tftext.html","title":null,"type":null},"content":[{"type":"text","text":"TF.text"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 库中的 TensorFlow ops 输入文本进行标记化:允许开发者建立自己的 TensorFlow 模型,将原始文本输入到预测输出,而无需使用 Python 的循环。这样可以提高计算速度,去除样板代码,减少出错的可能性,并且可以将整个文本序列化为输出模型,使得 BERT 在生产环境中更容易使用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"为了详细说明这些模型的具体作用,我们发布了两个新的教程:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/tutorials\/text\/classify_text_with_bert","title":null,"type":null},"content":[{"type":"text","text":"初级"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"教程:解决一项情感分析任务,不需要任何特殊定制,就能得到很好的模型质量。这是最简单的使用 BERT 和预处理模型的方法。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/tutorials\/text\/solve_glue_tasks_using_bert_on_tpu","title":null,"type":null},"content":[{"type":"text","text":"高级"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"教程:解决了在 TPU 上运行 "},{"type":"link","attrs":{"href":"http:\/\/gluebenchmark.com\/","title":null,"type":null},"content":[{"type":"text","text":"GLUE 
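The same TF Hub handles can also be wrapped as Keras layers when you want to fine-tune the encoder end to end. The following is only a minimal sketch of that idea: the `hub.KerasLayer` wrapper and the `pooled_output` name are taken from the example above, while the classification head and layer names are illustrative assumptions; the beginner tutorial linked below covers the complete recipe.

```
import tensorflow as tf
import tensorflow_hub as hub

# Minimal sketch: compose preprocessing + encoder into one fine-tunable classifier.
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="text")
preprocessor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/1")
encoder_inputs = preprocessor(text_input)
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3",
    trainable=True)  # allow the BERT weights to be fine-tuned
outputs = encoder(encoder_inputs)
pooled = outputs["pooled_output"]           # [batch_size, 768] sentence embedding
logits = tf.keras.layers.Dense(1)(pooled)   # illustrative binary classification head
model = tf.keras.Model(text_input, logits)
```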
基准"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"中的自然语言处理分类任务。它还说明了如何在需要多段输入的情况下使用预处理模型。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/e3\/e35307fe47b1a513a0fc27aef2cb932d.jpeg","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"选择 BERT 模型"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"BERT 模型是在大型文本语料库(例如,Wikipedia 文章的归档)上使用自我监督任务进行预训练的,比如根据上下文预测句子中的单词。这种类型的训练使模型能够在没有标记数据的情况下学习文本语义的强大表示。但是训练它需要大量的计算:在 16 个 TPU 上花费 4 天的时间(如 2018 年 "},{"type":"link","attrs":{"href":"https:\/\/arxiv.org\/abs\/1810.04805","title":null,"type":null},"content":[{"type":"text","text":"BERT 论文"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"所报道的)。所幸的是,在这种昂贵的预训练完成一次后,我们就可以为许多不同的任务高效地重用这种丰富的表示了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"八个 "},{"type":"link","attrs":{"href":"https:\/\/tfhub.dev\/google\/collections\/bert\/1","title":null,"type":null},"content":[{"type":"text","text":"BERT 模型"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"是与 BERT 原始作者发布的训练权重一起提供的。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"24 个 "},{"type":"link","attrs":{"href":"https:\/\/tfhub.dev\/google\/collections\/bert\/1","title":null,"type":null},"content":[{"type":"text","text":"Small BERT"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 具有相同的通用架构,但 Transformer 
- Eight [BERT models](https://tfhub.dev/google/collections/bert/1) come with the trained weights released by the original BERT authors.
- 24 [Small BERTs](https://tfhub.dev/google/collections/bert/1) share the same general architecture but have fewer and/or smaller Transformer blocks, which lets you explore the trade-offs between speed, size, and quality.
- [ALBERT](https://tfhub.dev/google/collections/albert/1): four different sizes of "A Lite BERT", which reduces model size (but not computation time) by sharing parameters between layers.
- The eight [BERT Experts](https://tfhub.dev/google/collections/experts/bert/1) all have the same BERT architecture and size but offer a choice of different pre-training domains and intermediate fine-tuning tasks, to align more closely with the target task.
- [Electra](https://tfhub.dev/google/collections/electra/1) has the same architecture as BERT (in three different sizes) but gets pre-trained as a discriminator, similar to a Generative Adversarial Network (GAN).
- BERT with Talking-Heads Attention and Gated GELU [[base](https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1), [large](https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1)] has two improvements to the core of the Transformer architecture.
- [Lambert](https://tfhub.dev/tensorflow/lambert_en_uncased_L-24_H-1024_A-16/1) has been trained with the LAMB optimizer and several techniques from RoBERTa.
- ...
These models are BERT encoders. The links above take you to their documentation on TF Hub, which names the correct preprocessing model for each of them. We encourage developers to visit these model pages to learn more about the different applications each model targets. Thanks to their common interface, it is easy to experiment with and compare the performance of different encoders on your specific task simply by changing the URLs of the encoder model and of its preprocessing model.
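As a rough illustration of that last point, the sketch below swaps in a different encoder only by changing two handles. The Small BERT handle shown here is an assumption chosen for illustration from the collection linked above; whichever encoder you pick, use the preprocessing model named on its TF Hub page.

```
import tensorflow_hub as hub

# Sketch: comparing encoders only requires changing the handles.
encoder_url = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1"   # illustrative
preprocess_url = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/1"

preprocess = hub.load(preprocess_url)
encoder = hub.load(encoder_url)

pooled_output = encoder(preprocess(["Same task, different encoder."]))["pooled_output"]
print(pooled_output.shape)  # hidden size depends on the encoder, e.g. 512 here vs. 768 for BERT-Base
```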
"},{"type":"codeinline","content":[{"type":"text","text":"input_mask"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(用于区分非填充和填充标记)和每个标记的 "},{"type":"codeinline","content":[{"type":"text","text":"input_type_ids"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"(可以区分每个输入的多个文本段,我们将在下面讨论)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"相同的预处理 SavedModel 还提供了更细粒度的 API,支持在编码器的一个输入序列中使用一个或两个不同的文本段。下面我们来看一个句子蕴含任务:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"text_premises = [\"The fox jumped over the lazy dog.\",\n \"Good day.\"]\ntokenized_premises = preprocess.tokenize(text_premises)\n \n\n \ntext_hypotheses = [\"The dog was lazy.\", # Entailed.\n \"Axe handle!\"] # Not entailed.\ntokenized_hypotheses = preprocess.tokenize(text_hypotheses)\n \n"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"每个标记化的结果是一个数字 "},{"type":"codeinline","content":[{"type":"text","text":"token id"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 的 "},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/guide\/ragged_tensor","title":null,"type":null},"content":[{"type":"text","text":"RaggedTensor"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",完整地表示每一个文本输入。如果某些前提和假设对太长,无法在下一步用于 BERT 输入的 "},{"type":"codeinline","content":[{"type":"text","text":"seq_length"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 内适应,则可以在这里进行额外的预处理,比如修剪文本段或将其分割成多个编码器输入。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"然后,将标记化的输入打包为用于 BERT 编码器的固定长度的输入序列:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"codeblock","attrs":{"lang":null},"content":[{"type":"text","text":"encoder_inputs = preprocess.bert_pack_inputs(\n [tokenized_premises, 
The same preprocessing SavedModel also offers a second, more fine-grained API, which supports putting one or two distinct text segments into a single input sequence for the encoder. Let's look at a sentence entailment task:

```
text_premises = ["The fox jumped over the lazy dog.",
                 "Good day."]
tokenized_premises = preprocess.tokenize(text_premises)

text_hypotheses = ["The dog was lazy.",  # Entailed.
                   "Axe handle!"]        # Not entailed.
tokenized_hypotheses = preprocess.tokenize(text_hypotheses)
```

The result of each tokenization is a [RaggedTensor](https://www.tensorflow.org/guide/ragged_tensor) of numeric token ids, representing each of the text inputs in full. If some pairs of premises and hypotheses are too long to fit within the `seq_length` used for BERT inputs in the next step, you can do additional preprocessing here, such as trimming the text segments or splitting them across multiple encoder inputs.

The tokenized input is then packed into a fixed-length input sequence for the BERT encoder:

```
encoder_inputs = preprocess.bert_pack_inputs(
    [tokenized_premises, tokenized_hypotheses],
    seq_length=18)  # Optional argument, defaults to 128.

{'input_word_ids': ...,   # tensor values omitted in this excerpt
 'input_mask': ...,
 'input_type_ids': ...}
```

The result of packing is the already-familiar `input_word_ids`, `input_mask`, and `input_type_ids` (which are 0 and 1 for the first and second input, respectively). All outputs share a common `seq_length` (128 by default). Inputs that would exceed `seq_length` are truncated to approximately equal sizes during packing.
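The packed dictionary can then be fed to the encoder exactly like the output of the one-step `preprocess()` call earlier. The short sketch below just connects the two pieces, reloading the same encoder handle used in the first example; the annotated shapes follow from a batch of two pairs and `seq_length=18`.

```
import tensorflow_hub as hub

# Sketch: run the encoder on the packed two-segment inputs from above.
encoder = hub.load('https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3')

outputs = encoder(encoder_inputs)
pooled_output = outputs["pooled_output"]      # (2, 768): one embedding per premise/hypothesis pair
sequence_output = outputs["sequence_output"]  # (2, 18, 768): one vector per token position
```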
"},{"type":"codeinline","content":[{"type":"text","text":"tf.data.Dataset.map()"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",可以在数据集中异步运行预处理计算,并且TPU上的编码器模型可以消耗密集的输出。这种异步预处理还可以改善其他加速器的性能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"我们的"},{"type":"link","attrs":{"href":"https:\/\/www.tensorflow.org\/tutorials\/text\/solve_glue_tasks_using_bert_on_tpu","title":null,"type":null},"content":[{"type":"text","text":"高级"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" BERT 教程可以在使用 TPU 工作器的 Colab 运行时中运行,并演示了这种端到端的方式。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"总结"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"在 TensorFlow 中使用 BERT 和类似的模型已经变得更加简单了。TensorFlow Hub 提供了"},{"type":"link","attrs":{"href":"https:\/\/tfhub.dev\/google\/collections\/transformer_encoders_text\/1","title":null,"type":null},"content":[{"type":"text","text":"大量"}],"marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}]},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"预训练 BERT 编码器"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"和"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"文本预处理模型"},{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":",只需几行代码就能很容易地使用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"作者介绍:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"Arno Eigenwillig,软件工程师。 Luiz GUStavo Martins,开发技术推广工程师。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}},{"type":"strong"}],"text":"原文链接:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"color","attrs":{"color":"#494949","name":"user"}}],"text":"https:\/\/blog.tensorflow.org\/2020\/12\/making-bert-easier-with-preprocessing-models-from-tensorflow-hub.html"}]}]}