As of March 7, 2018, all three posts in this series are finished; I may add updates if new material comes up.
LDA topic mining in Python (1): preprocessing (English corpora)
LDA topic mining in Python (2): training an LDA model with gensim
LDA topic mining in Python (3): computing perplexity
This is the second post in my LDA topic-mining series. It shows how to train a model on your preprocessed corpus using the methods provided by the gensim package.
gensim offers two implementations:
The slower, single-core one:
For detailed parameter descriptions and usage, see the official documentation: models.ldamodel – Latent Dirichlet Allocation
from gensim.models.ldamodel import LdaModel
# train a model on the prepared corpus
lda = LdaModel(corpus, num_topics=10)
# infer the topic distribution of a new document
doc_lda = lda[doc_bow]
# update the model with additional documents
lda.update(other_corpus)
The faster one, which uses multiple cores:
For detailed parameter descriptions and usage, see the official documentation: models.ldamulticore – parallelized Latent Dirichlet Allocation
>>> from gensim.models import LdaMulticore
>>> lda = LdaMulticore(corpus, id2word=id2word, num_topics=100) # train model
>>> print(lda[doc_bow]) # get topic probability distribution for a document
>>> lda.update(corpus2) # update the LDA model with additional documents
>>> print(lda[doc_bow])
The performance gain from multiprocessing (quoted from the gensim docs):
Wall-clock performance on the English Wikipedia (2G corpus positions, 3.5M documents, 100K features, 0.54G non-zero entries in the final bag-of-words matrix), requesting 100 topics:
(Measured on this i7 server with 4 physical cores, so that optimal workers=3, one less than the number of cores.)
algorithm | training time
---|---
LdaMulticore(workers=1) | 2h30m
LdaMulticore(workers=2) | 1h24m
LdaMulticore(workers=3) | 1h6m
oldLdaModel | 3h44m
simply iterating over input corpus = I/O overhead | 20m
workers should be set to one less than the number of CPU cores.
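Following that rule of thumb, the worker count can be derived at runtime. A minimal sketch; note that `multiprocessing.cpu_count()` reports logical cores, while the benchmark above counts physical cores, so on a hyperthreaded machine you may want to halve it first:

```python
import multiprocessing

# Leave one core free for the master process that feeds document chunks to the workers.
workers = max(1, multiprocessing.cpu_count() - 1)
print(workers)
```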
The code in this post uses the multi-core method.
Questions are welcome in the comments.
After converting the corpus to a bag-of-words corpus, this post applies the following transformation:
tfidf = models.TfidfModel(corpus)
corpusTfidf = tfidf[corpus]
This step reweights the terms in the corpus, lowering the weight of high-frequency words that appear in every document. For the underlying idea, see Ruan Yifeng's blog series "TF-IDF與餘弦相似性的應用(一):自動提取關鍵詞" (applications of TF-IDF and cosine similarity, part 1: automatic keyword extraction). In my experiments this step brought no obvious improvement and is fairly time-consuming, so I do not recommend it; you can pass corpus directly into the LDA model as training data.
#-*-coding:utf-8-*-
import sys
reload(sys)                          # Python 2 only: recover setdefaultencoding
sys.setdefaultencoding('utf-8')
import os
import codecs
from gensim import corpora, models
from datetime import datetime
import platform
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

platform_info = platform.platform().lower()
if 'windows' in platform_info:
    code = 'gbk'
elif 'linux' in platform_info:
    code = 'utf-8'
else:
    code = 'utf-8'                   # fall back to utf-8 on other systems
path = sys.path[0]

class GLDA(object):
    """Wrapper around gensim's multicore LDA training."""

    def __init__(self, stopfile=None):
        super(GLDA, self).__init__()
        if stopfile:
            with codecs.open(stopfile, 'r', code) as f:
                self.stopword_list = f.read().split(' ')
            print('the num of stopwords is : %s' % len(self.stopword_list))
        else:
            self.stopword_list = None

    def lda_train(self, num_topics, datafolder, middatafolder, dictionary_path=None, corpus_path=None, iterations=5000, passes=1, workers=3):
        time1 = datetime.now()
        num_docs = 0
        doclist = []
        # If no dictionary or corpus is supplied, read the preprocessed docword files.
        # The first run always needs this; later runs (e.g. while tuning parameters)
        # can pass the saved dictionary/corpus paths instead.
        if not corpus_path or not dictionary_path:
            for filename in os.listdir(datafolder):  # read every corpus file under datafolder
                with codecs.open(datafolder + filename, 'r', code) as source_file:
                    for line in source_file:
                        num_docs += 1
                        if num_docs % 100000 == 0:
                            print('%s, %s' % (filename, num_docs))
                        # doc = [word for word in doc if word not in self.stopword_list]
                        doclist.append(line.split(' '))
                print('%s, %s' % (filename, num_docs))
        if dictionary_path:
            dictionary = corpora.Dictionary.load(dictionary_path)  # load an existing dictionary
        else:
            # build the vocabulary from the documents and save it
            dictionary = corpora.Dictionary(doclist)
            dictionary.save(middatafolder + 'dictionary.dictionary')
        if corpus_path:
            corpus = corpora.MmCorpus(corpus_path)  # load an existing corpus
        else:
            corpus = [dictionary.doc2bow(doc) for doc in doclist]
            corpora.MmCorpus.serialize(middatafolder + 'corpus.mm', corpus)  # save the corpus
        tfidf = models.TfidfModel(corpus)
        corpusTfidf = tfidf[corpus]
        time2 = datetime.now()
        lda_multi = models.ldamulticore.LdaMulticore(corpus=corpusTfidf, id2word=dictionary, num_topics=num_topics, \
                    iterations=iterations, workers=workers, batch=True, passes=passes)  # start training
        lda_multi.print_topics(num_topics, 30)  # print the topic-word matrix
        print('lda training time cost is : %s, all time cost is : %s ' % (datetime.now() - time2, datetime.now() - time1))
        # saving / loading the model
        lda_multi.save(middatafolder + 'lda_tfidf_%s_%s_%s.model' % (2014, num_topics, iterations))  # save the model
        # lda = models.ldamodel.LdaModel.load('zhwiki_lda.model')  # load a saved model
        # save the doc-topic-id
        topic_id_file = codecs.open(middatafolder + 'topic.json', 'w', 'utf-8')
        for i in range(len(corpus)):
            # take the most probable topic as the document's topic
            topic_id = max(lda_multi[corpusTfidf[i]], key=lambda t: t[1])[0]
            topic_id_file.write(str(topic_id) + ' ')
        topic_id_file.close()

if __name__ == '__main__':
    datafolder = path + os.sep + 'docword' + os.sep  # folder with the preprocessed corpus files; every file in it is read
    middatafolder = path + os.sep + 'middata' + os.sep
    dictionary_path = middatafolder + 'dictionary.dictionary'  # a previously built dictionary; set to False if none exists
    corpus_path = middatafolder + 'corpus.mm'  # a previously serialized corpus; set to False if none exists
    # stopfile = path + os.sep + 'rest_stopwords.txt'  # extra stopword file
    num_topics = 50
    passes = 2  # roughly the number of full passes over the corpus; larger values mean more parameter updates and longer training
    iterations = 6000
    workers = 3  # number of worker processes
    lda = GLDA()
    lda.lda_train(num_topics, datafolder, middatafolder, dictionary_path=dictionary_path, corpus_path=corpus_path, iterations=iterations, passes=passes, workers=workers)
Once the model is trained, how do you evaluate it in order to pick suitable parameters?
See the next post in the series: LDA topic mining in Python (3): computing perplexity.
That's all; comments and corrections are welcome.