An ALBERT-Based Text Similarity Solution

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"一、引言"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 公司很多项目都有这么一个需求:新的一份文本需要到历史库中,看是否存在类是的文本。在自然语言处理中这类问题属于文本相似度计算的范畴,简单而言:就是给一个被计较的文本a,和一个可能存在相似文本的集合C,找出集合C中所有和文本a相似的文本。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"二、思路探索"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 整体思路大体可以分为两种:一是,文本间直接进行相似度;二是,针对文本提取特征,对文本特征间进行相似度计算。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"2.1 文本之间进行相似度计算"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.1.1算法有吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 答案是肯定。例如BERT、ALBERT包括词向量(下文称之为:word2vec)等等,很多算法都是可以支持的。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.1.2 方案优点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 不用对文本进行\"深加工\""}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 调用算法运算成本低(后面详解)"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.1.3 方案缺点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 文本中有很多\"无意义\"的描述,对这些描述进行相似度计算浪费计算资源的同时还会影响最终结果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 文本可能过长,导致模型的输入不能一次接受,需要拆分成多次,这就比较复杂了。比如:下图中有两个文本T1和T2,他们都是在描述A、B、C三件事,但是描述的字数和顺序可能不尽相同。假设总共都有1000个字,以ALBERT为例,他只能输入最多500的字,那么只能对T1,T2拆分成4块,每块250个字,就会出现T1的某一块都是描述A,而T2描述A和B的一部分,这就会给模型的识别造成不必要的困扰。["},{"type":"text","marks":[{"type":"strong"}],"text":"注:这种情况下只会进行4*4=16次相似度计算"},{"type":"text","text":"]"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/16/16e00abd22d69e4cfddf80031c434a69.png","alt":null,"title":"文本间直接比较示意图","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 虽然词向量可以解决输入长度的问题,但是文本中所有词的词向量之后再如何进行相似度计算呢?而且词向量对一词多义的词语无法处理,不能很好的结合语义特征。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"2.2 文本特征间进行相似度计算"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.2.1算法有吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 文本间相似度计算的算法,都是可以被挪用到特征间相似度计算。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.2.2 方案优点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 不用考虑文本拆分不好造成的不利影响"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 数据标注简单"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2.2.3 
方案缺点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 调用算法成本高"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 还是上面那个例子,可以对T1和T2文本进行关键词的提取,比如说T1提取了40个关键词,T2提取了30个关键词,那么就需要调用1200次相似度算法进行计算,而文本间只需要调用16次。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"2.3 最终选择"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 虽然上述两者方案都有各自的优缺点,但是考虑到相似文本数据少,如果采用文本间相似度计算,标注数据会比较困难,而且中文博大精深,一个字的不同都会导致句子的含义不同;而采用特征间相似度计算,只需要对特征(比如:关键词)进行相似词语进行标记就可以了,虽然会耗费计算资源,可以多部署几套进行解决,因此,最终采用文本间特征相似度计算。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"三、关键词相似度计算"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 文本的特征有多种多样,综合多方面考虑选择\"关键词\"作为文本的特征进行相似度计算(考虑:文本相似度计算,在进入人工智能算法计算相似度之前,会对文本的主体包括人名、地址、机构、职级等一些结构化信息先进行判断,对送入算法的本文其实只需要考虑文本间\"关键词\"的相似与否即可,不必要对上篇的文本在进行逐一地判别)。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"3.1 关键词提取算法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 关键词提取算法有很多,都各有优劣,下面只介绍常用其中的几种:"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3.1.1 TF-IDF关键词提取办法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" TF-IDF,即:词频-逆文件频率。是一种用于资讯检索与资讯探勘的常用加权技术。TF-IDF是一种统计方法,用以评估一字词对于一个文件集或一个语料库中的其中一份文件的重要程度。字词的重要性随着它在文件中出现的次数成正比增加,但同时会随着它在语料库中出现的频率成反比下降。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 示例:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/a4/a49047b0b17eb7138ea8512e95a37d4c.png","alt":null,"title":"TF-IDF提取示例","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" TF-IDF算法的优点是简单快速,结果比较符合实际情况。缺点是,单纯以\"词频\"衡量一个词的重要性,不够全面,有时重要的词可能出现次数并不多。而且,这种算法无法体现词的位置信息,出现位置靠前的词与出现位置靠后的词,都被视为重要性相同,这是不正确的。(一种解决方法是,对全文的第一段和每一段的第一句话,给予较大的权重。)"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3.1.2 TextRank算法"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "},{"type":"link","attrs":{"href":"http://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf","title":""},"content":[{"type":"text","text":"TextRank "}]},{"type":"text","text":"算法是一种用于文本的基于图的排序算法。其基本思想来源于谷歌的 "},{"type":"link","attrs":{"href":"http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf","title":""},"content":[{"type":"text","text":"PageRank"}]},{"type":"text","text":"算法, 通过把文本分割成若干组成单元(单词、句子)并建立图模型, 利用投票机制对文本中的重要成分进行排序, 仅利用单篇文档本身的信息即可实现关键词提取、文摘。和 LDA、HMM 等模型不同, TextRank不需要事先对多篇文档进行学习训练, 因其简洁有效而得到广泛应用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" TextRank 一般模型可以表示为一个有向有权图 G =(V, E), 由点集合 V和边集合 E 组成, E 是V ×V的子集。图中任两点 Vi , Vj 之间边的权重为 wji , 对于一个给定的点 Vi, In(Vi) 为 指 向 该 点 的 点 集 合 , Out(Vi) 为点 Vi 指向的点集合。点 Vi 
### 3.1.3 Word-vector clustering

Word vectors represent words as vectors; common algorithms include Word2Vec, GloVe, and ELMo. This method first trains word vectors on a corpus, then clusters them with K-means, and finally takes the K words closest to the cluster centers as keywords.

Pros:

- Fast, and the extracted results are reasonably accurate.

Cons:

- A word-vector vocabulary can hardly cover all the keywords of a specific task.
- Missing keywords must be added by hand, which is tedious.
- Only K keywords are extracted; all other words are discarded by default, so extraction is very likely incomplete.
- Training word vectors is straightforward, but the resulting models tend to be large and slow to load the first time.

### 3.1.4 Keyword extraction with BERT/ALBERT via NER

BERT and ALBERT have been two of the most acclaimed models of the past couple of years; NER is named entity recognition. In our "Smart Forensics" project we used BERT and ALBERT for NER to recognize injury locations, sizes, and types in injury descriptions; the idea here is to treat a keyword as an entity to be extracted. The benefits: the notion of "keyword" can be customized per task ("Smart Forensics" treats injury locations as keywords, case-cause extraction treats case causes as keywords), so the model's reusability improves greatly; it no longer solves a single problem but a whole class of problems. Generalization is also strong: the extraction methods above all share one weakness, namely Chinese word segmentation, whose quality heavily influences the extraction result, whereas BERT and ALBERT can be trained at the character level, free of the constraints of word segmentation. The drawbacks: annotating data takes effort, and computation time grows with text length, though results still return within 3 s.

**Comparing the keyword extraction algorithms above, we finally adopted ALBERT + BiLSTM + CRF** (BiLSTM: bidirectional long short-term memory network; CRF: conditional random field).
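A minimal sketch of such a tagger, assuming PyTorch with the `transformers` and `pytorch-crf` packages; the checkpoint name, tag count, and hidden sizes are illustrative assumptions rather than the project's real settings.

```python
import torch.nn as nn
from transformers import AutoModel  # pip install transformers
from torchcrf import CRF            # pip install pytorch-crf

class AlbertBiLstmCrf(nn.Module):
    """ALBERT encoder -> BiLSTM -> per-token emissions -> CRF tagging layer."""

    def __init__(self, pretrained="voidful/albert_chinese_tiny",  # assumed checkpoint
                 num_tags=5, lstm_hidden=128):                    # e.g. BIO tags over keywords
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Character-level contextual features from ALBERT (no word segmentation needed).
        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.bilstm(x)
        e = self.emissions(x)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(e, tags, mask=mask)
        # Inference: the most likely tag sequence for each sentence.
        return self.crf.decode(e, mask=mask)
```

Character spans that the CRF labels as keyword entities become the extracted keywords.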
## 3.2 Similarity computation

Since keyword extraction already uses ALBERT + BiLSTM + CRF, ALBERT is used for similarity computation as well.

**Why ALBERT rather than BERT?**

ALBERT is an improvement on BERT, reworking it mainly through factorized embedding parameterization and cross-layer parameter sharing, while the overall model structure and the inputs and outputs are unchanged. ALBERT converges faster, predicts faster (roughly a tenth of BERT's time), and is much smaller (ALBERT_tiny is only a dozen or so MB), with accuracy comparable to BERT's.

Model structure (**the BERT in the figure is replaced by ALBERT in practice**):

![Model structure](https://static001.geekbang.org/infoq/f7/f7aef4cd9bde7c8139b07f66dbf6f0aa.png)

In actual use, keyword 1 is preceded by the CLS token and keyword 2 by the SEP token; at the output we take only C, the sentence-vector output, feed it into a fully connected layer, and perform binary classification (1: similar, 0: not similar). That completes the similarity model.
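A minimal sketch of this pairwise classifier under the same assumptions (PyTorch plus `transformers`; the checkpoint name is again an assumption):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, BertTokenizerFast

CHECKPOINT = "voidful/albert_chinese_tiny"  # assumed Chinese ALBERT checkpoint

class KeywordSimilarity(nn.Module):
    """Encode the pair as [CLS] kw1 [SEP] kw2 [SEP]; the [CLS] vector C feeds a 2-way head."""

    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(CHECKPOINT)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.encoder(input_ids, attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        c = out.last_hidden_state[:, 0]   # C: the output at the [CLS] position
        return self.classifier(c)         # logits for (0: not similar, 1: similar)

# This checkpoint ships a BERT-style vocabulary, hence the BERT tokenizer.
tokenizer = BertTokenizerFast.from_pretrained(CHECKPOINT)
model = KeywordSimilarity()
batch = tokenizer("左手臂挫伤", "左臂擦挫伤", return_tensors="pt")  # one keyword pair
logits = model(batch["input_ids"], batch["attention_mask"], batch["token_type_ids"])
print(torch.softmax(logits, dim=-1))  # untrained head: probabilities are not meaningful yet
```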
# 4. A practical example

The flow chart below shows how the whole text-similarity pipeline is currently used in projects:

![Pipeline flow chart](https://static001.geekbang.org/infoq/c6/c6338bd653fc4bf213d6c9b729f7c0c4.png)

**Note: the final output is a weighted combination of the ALBERT similarity results and the match over the text's structured information.**
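The weighting itself is not spelled out here, so the following is a purely hypothetical illustration of that fusion step, with made-up weights:

```python
def final_score(albert_sim: float, struct_sim: float, w_albert: float = 0.7) -> float:
    """Hypothetical convex combination of the ALBERT keyword-similarity score and
    the structured-information match score (names, addresses, organizations, ...)."""
    return w_albert * albert_sim + (1 - w_albert) * struct_sim
```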
# 5. Summary

The text-similarity pipeline as a whole is highly reusable, extracts keywords accurately, and produces good similarity results. One weakness remains: when a single request has to compare many keyword pairs, it takes a while. Real-time performance is currently acceptable (about 3 s to compute similarity for 1,200 keyword pairs); if stricter real-time requirements arise, additional instances can be deployed, and the computation time is still being optimized.