Probabilistic Language Models 3 — A Roundup of Training Tools

Reposted from https://www.itread01.com/content/1547467935.html

 

Traditional algorithms

1) BerkeleyLM — written in Java; claims performance comparable to KenLM, with a smaller memory footprint than SRILM

https://github.com/adampauls/berkeleylm

2) MITLM (the MIT Language Modeling Toolkit) — does parameter optimization particularly well

https://code.google.com/p/mitlm/ or https://github.com/mitlm/mitlm

3) SRILM (the SRI Language Modeling Toolkit) — a long-established language modeling tool, developed by SRI International (formerly the Stanford Research Institute); written in C++

http://www.speech.sri.com/projects/srilm/

In addition, maximum entropy (MaxEnt) language models are supported in the SRILM toolkit

https://phon.ioc.ee/dokuwiki/doku.php?id=people:tanel:srilm-me.en
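As a quick how-to, the sketch below drives SRILM's ngram-count tool from Python to train a trigram model with modified Kneser-Ney smoothing. The corpus and output file names are hypothetical, and the SRILM binaries are assumed to be on your PATH.

```python
import subprocess

# Train a trigram LM with modified Kneser-Ney smoothing via SRILM's
# ngram-count; "train.txt" and "model.arpa" are hypothetical file names.
subprocess.run([
    "ngram-count",
    "-text", "train.txt",    # training corpus, one sentence per line
    "-order", "3",           # trigram model
    "-kndiscount",           # modified Kneser-Ney discounting
    "-interpolate",          # interpolate higher- and lower-order estimates
    "-lm", "model.arpa",     # write the model in ARPA format
], check=True)
```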

4) IRSTLM (the IRST Language Modeling Toolkit) — developed by the FBK-IRST lab in Trento, Italy, to handle larger-scale training data; integrated into Moses.

When training a language model, IRSTLM partitions the dictionary, trains on the resulting blocks separately, and then merges them quickly, which gives it excellent performance on large-scale corpora.

IRSTLM trains a language model in the following five steps (a minimal sketch of the idea follows the list):

  • a) Build a vocabulary with word frequencies from the training corpus;
  • b) Split the vocabulary into several sub-vocabularies, balancing them by word frequency;
  • c) Count n-grams for each sub-vocabulary; every n-gram must begin with a word from that sub-vocabulary;
  • d) Build a sub-language-model from each set of counts produced in step (c);
  • e) Merge all the sub-language-models into the final language model.
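Below is a minimal Python sketch of that split/count/merge scheme, assuming the corpus is a list of tokenized sentences. It only illustrates the idea and is not IRSTLM's actual implementation.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Yield all n-grams of a token list as tuples."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def train_sharded(corpus, n=3, num_shards=4):
    # a) Vocabulary with word frequencies.
    vocab = Counter(tok for sent in corpus for tok in sent)
    # b) Split into sub-vocabularies, balanced by frequency
    #    (greedy: each word goes to the least-loaded shard).
    shards, loads = [set() for _ in range(num_shards)], [0] * num_shards
    for word, freq in vocab.most_common():
        i = loads.index(min(loads))
        shards[i].add(word)
        loads[i] += freq
    # c) Count n-grams per shard, keyed by the n-gram's first word.
    sub_counts = [Counter() for _ in range(num_shards)]
    for sent in corpus:
        for gram in ngrams(sent, n):
            for i, shard in enumerate(shards):
                if gram[0] in shard:
                    sub_counts[i][gram] += 1
                    break
    # d)+e) Build a sub-model per shard, then merge into the final model
    # (here each "sub-model" is simply its count table).
    merged = Counter()
    for counts in sub_counts:
        merged.update(counts)
    return merged

# Example: counts = train_sharded([["a", "b", "a"], ["b", "a", "b"]], n=2)
```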

http://hlt-mt.fbk.eu/technologies/irstlm or https://github.com/irstlm-team/irstlm

5) KenLM (Kenneth Heafield's language model toolkit) — its hallmarks are speed and low memory use. It is said to be somewhat better than SRILM, supports large-data training on a single machine, and offers both C++ and Python interfaces.

http://kheafield.com/code/kenlm/ or https://github.com/kpu/kenlm
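As an illustration of the Python interface, a minimal usage sketch follows. It assumes the kenlm package is installed and that a model (hypothetically model.arpa) has already been estimated, e.g. with KenLM's lmplz tool.

```python
import kenlm

# Load an ARPA (or binarized) model; "model.arpa" is a hypothetical path.
model = kenlm.Model("model.arpa")

sentence = "language models are fun"
# Total log10 probability, including begin/end-of-sentence markers.
print(model.score(sentence, bos=True, eos=True))
# Per-word perplexity of the sentence under the model.
print(model.perplexity(sentence))
```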

6) Bigfatlm — provides Hadoop-based training of Kneser-Ney language models; written in Java

https://github.com/jhclark/bigfatlm

7) Kylm (Kyoto Language Modeling Toolkit) — written in Java; can output models in WFST format for use with WFST decoders

http://www.phontron.com/kylm/  or  https://github.com/neubig/kylm

8) OpenGrm — a language modeling toolkit for use with OpenFst; it makes and modifies n-gram language models encoded as weighted finite-state transducers (FSTs)

http://opengrm.org/
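A rough Python-driven sketch of the usual OpenGrm NGram pipeline follows. The tool names come from the OpenGrm quick start, but the exact flag spellings may differ across versions, and every file name here is hypothetical.

```python
import subprocess

def run(cmd, out_path, in_path=None):
    """Run an OpenGrm/OpenFst CLI tool, mimicking shell redirection."""
    fin = open(in_path, "rb") if in_path else None
    try:
        with open(out_path, "wb") as fout:
            subprocess.run(cmd, stdin=fin, stdout=fout, check=True)
    finally:
        if fin:
            fin.close()

# corpus.txt: one whitespace-tokenized sentence per line.
run(["ngramsymbols"], "corpus.syms", in_path="corpus.txt")  # symbol table
run(["farcompilestrings", "--symbols=corpus.syms", "--keep_symbols=1",
     "corpus.txt"], "corpus.far")                           # FST archive
run(["ngramcount", "--order=3", "corpus.far"], "corpus.cnts")  # n-gram counts
run(["ngrammake", "corpus.cnts"], "corpus.lm")              # smoothed LM as a WFST
```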

 

Deep learning

1) RNNLM (Recurrent neural network language model toolkit)

http://rnnlm.org/ or http://www.fit.vutbr.cz/~imikolov/rnnlm/

2) BRNNLM (Bayesian recurrent neural network for language modeling)

http://chien.cm.nctu.edu.tw/bayesian-recurrent-neural-network-for-language-modeling

3) RWTHLM (RWTH Aachen University Neural Network Language Modeling Toolkit; includes feedforward, recurrent, and long short-term memory neural networks)

http://www-i6.informatik.rwth-aachen.de/web/Software/rwthlm.php

4) Character-Aware Neural Language Models — employs a convolutional neural network (CNN) over characters, whose output serves as the input to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM)

https://github.com/yoonkim/lstm-char-cnn
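To make the architecture concrete, here is a minimal PyTorch sketch of the idea, not the paper's released code: a character-level CNN with max-over-time pooling produces word representations that feed an LSTM language model. The highway layers of the original model are omitted, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class CharCNNLSTMLM(nn.Module):
    """Minimal sketch: char-CNN word encoder feeding an LSTM LM."""

    def __init__(self, n_chars=100, char_dim=15, n_filters=100,
                 kernel_width=3, hidden=300, vocab_size=10000):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Convolution over each word's character sequence.
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=kernel_width)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)  # next-word logits

    def forward(self, chars):
        # chars: (batch, seq_len, word_len) character ids.
        B, T, W = chars.shape
        x = self.char_emb(chars.view(B * T, W))   # (B*T, W, char_dim)
        x = self.conv(x.transpose(1, 2))          # (B*T, n_filters, W-k+1)
        x = torch.relu(x).max(dim=2).values       # max-over-time pooling
        h, _ = self.lstm(x.view(B, T, -1))        # word-level LSTM
        return self.out(h)                        # (B, T, vocab_size)

# Tiny smoke test with random character ids.
logits = CharCNNLSTMLM()(torch.randint(0, 100, (2, 5, 8)))
print(logits.shape)  # torch.Size([2, 5, 10000])
```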
