Part-of-Speech Tagging: Removing Stop Words
Part-of-speech (POS) tagging labels each token produced by word segmentation with its grammatical category; most segmenters output the POS tag alongside each token. A common application is filtering by POS (removing particles, stop words, and similar function words) to obtain more informative text.
The approach has two steps: first, define a custom set of POS tags to exclude; second, read the file in, segment each line, compare each token's POS tag against that set, discard the matches, and rejoin the remaining tokens.
extract_data.py:
from tokenizer import seg_sentences

fp = open("text.txt", 'r', encoding='utf8')
fout = open("out.txt", 'w', encoding='utf8')
for line in fp:
    line = line.strip()
    if len(line) > 0:
        # Segment the line, filter by POS, and write the surviving tokens
        fout.write(' '.join(seg_sentences(line)) + "\n")
fp.close()
fout.close()
tokenizer.py:
import os, re
from jpype import *

# Start the JVM with the HanLP jar on the classpath
startJVM(getDefaultJVMPath(),
         r"-Djava.class.path=E:\NLP\hanlp\hanlp-1.5.0.jar;E:\NLP\hanlp",
         "-Xms1g",
         "-Xmx1g")
Tokenizer = JClass('com.hankcs.hanlp.tokenizer.StandardTokenizer')
# POS tags to drop: particles, punctuation, prepositions, pronouns,
# and similar function-word categories
drop_pos_set = set(['xu', 'xx', 'y', 'yg', 'wh', 'wky', 'wkz', 'wp', 'ws', 'wyy',
                    'wyz', 'wb', 'u', 'ud', 'ude1', 'ude2', 'ude3', 'udeng', 'udh',
                    'p', 'rr'])

def to_string(sentence, return_generator=False):
    # Segment the sentence and split each term's "word/pos" string
    # representation into a (word, POS) pair
    if return_generator:
        # .toString() renders each term as "word/pos"
        return (word_pos_item.toString().split('/')
                for word_pos_item in Tokenizer.segment(sentence))
    else:
        return [(word_pos_item.toString().split('/')[0],
                 word_pos_item.toString().split('/')[1])
                for word_pos_item in Tokenizer.segment(sentence)]
def seg_sentences(sentence, with_filter=True, return_generator=False):
    segs = to_string(sentence, return_generator=return_generator)
    # with_filter controls whether tokens with unwanted POS tags are dropped
    if with_filter:
        # Keep a token only if its POS tag is not in the custom drop set
        g = [word_pos_pair[0] for word_pos_pair in segs
             if len(word_pos_pair) == 2 and word_pos_pair[0] != ' '
             and word_pos_pair[1] not in drop_pos_set]
    else:
        g = [word_pos_pair[0] for word_pos_pair in segs
             if len(word_pos_pair) == 2 and word_pos_pair[0] != ' ']
    return iter(g) if return_generator else g
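The filtering step itself does not depend on HanLP or the JVM. The sketch below isolates that logic using a hypothetical pre-tagged token list (standing in for `Tokenizer.segment`'s output); `filter_by_pos` and the sample pairs are illustrative names, not part of the code above:

```python
# Minimal sketch of the POS filter, assuming a hypothetical pre-tagged
# input list in place of HanLP's Tokenizer.segment output.
drop_pos_set = {'ude1', 'p', 'rr', 'y'}  # a subset of the tags dropped above

def filter_by_pos(word_pos_pairs, drop_set):
    """Keep only the words whose POS tag is not in drop_set."""
    return [word for word, pos in word_pos_pairs if pos not in drop_set]

# Hypothetical segmentation of a short sentence: the pronoun ('rr')
# and the particle 的 ('ude1') should be removed.
pairs = [('我', 'rr'), ('喜歡', 'v'), ('自然語言處理', 'nz'),
         ('的', 'ude1'), ('技術', 'n')]
print(' '.join(filter_by_pos(pairs, drop_pos_set)))  # → 喜歡 自然語言處理 技術
```

Keeping the filter separate from the segmenter also makes it easy to swap in a different tokenizer later, since the filter only needs (word, POS) pairs.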
Processing result: