Training word2vec word vectors in Python: a worked example (python gensim)

Preliminaries are covered in earlier posts:

1. For word2vec vector training in Python, see https://blog.csdn.net/shuihupo/article/details/85156544 (word vector training).

2. For Chinese corpus preprocessing for word2vec (a python gensim word2vec summary), see https://mp.csdn.net/postedit/85162237, which collects several ways of loading a corpus.
The earlier posts go into detail; this one jumps straight to a full example. If anything is unclear, refer back to the corpus-processing and vector-training posts.

A word2vec training example in Python

word2vec Chinese corpus processing and model training in practice

The practice code is adapted from a linked post; the raw corpus is the novel In the Name of the People (《人民的名義》), available for download.

Change the path in the code to wherever the novel text is stored.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# @Time    : 2018/12/21 16:49
# @Author  : LinYimeng
# @File    : word2vec_test.py
# @Software: PyCharm
import multiprocessing
import jieba
from gensim.test.utils import get_tmpfile
from gensim.models import Word2Vec

# Use a raw string so the Windows backslashes are not treated as escapes.
with open(r'C:\Users\Administrator\Desktop\in_the_name_of_people\in_the_name_of_people.txt',
          encoding='utf-8') as f:
    document = f.read()
    # Segment the novel with jieba, then join the tokens with spaces --
    # the whitespace-separated format LineSentence expects.
    document_cut = jieba.cut(document)
    result = ' '.join(document_cut)
    print("type", type(result))
    with open('./in_the_name_of_people_segment.txt', 'w', encoding="utf-8") as f2:
        f2.write(result)

# import logging
from gensim.models import word2vec

# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# LineSentence streams the space-separated file one line at a time.
sentences = word2vec.LineSentence('./in_the_name_of_people_segment.txt')
model = Word2Vec(sentences, size=200, window=5, min_count=1,
                 workers=multiprocessing.cpu_count())
# Save the full model (training can be resumed later) and the word
# vectors alone (smaller, read-only, enough for similarity queries).
model.save("w2v_model.bin")
model.wv.save("w2v_vector.bin")
# Query the ten words most similar to '人民'.
for key in model.wv.similar_by_word('人民', topn=10):
    print(key)

Output, the ten nearest neighbours of '人民':

('錢', 0.9998364448547363)
('但', 0.9998363256454468)
('倒', 0.9998291730880737)
('以後', 0.99982750415802)
('回來', 0.9998223185539246)
('工作', 0.999817967414856)
('趙家', 0.9998155236244202)
('趙瑞龍', 0.9998130798339844)
('打', 0.9998125433921814)
('一次', 0.9998101592063904)

