Building a Simple News-Based Sentiment Factor for Stocks (a Market-Wide Factor)

Basic idea: use the previous day's news data to predict whether the market index rises or falls the next day, with a rise labelled 1 and a fall labelled 0.
Building the dataset:

import tushare as ts
ts.set_token(' ')
#ts.set_token('your token here')
pro = ts.pro_api()
df1 = pro.cctv_news(date='20190916')  # hand-set label: 0 (fall)
df2 = pro.cctv_news(date='20190917')  # hand-set label: 1 (rise)

This step is only a test version. For real use, Friday/Saturday/Sunday news and holiday news have to be merged into the following trading day, and the labels should be determined from market index data rather than set by hand (see the sketch below).
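For reference, here is a minimal sketch (not part of the original test code) of deriving the 0/1 label from index data instead of setting it by hand. It assumes tushare's pro.index_daily interface and the Shanghai Composite index code 000001.SH; aligning each news day with the next trading day, including the weekend/holiday merge, is left out.

# Sketch only: pull index returns and turn them into 0/1 labels (rise = 1, fall = 0)
idx_df = pro.index_daily(ts_code='000001.SH',
                         start_date='20190917', end_date='20190920')
idx_df = idx_df.sort_values('trade_date').reset_index(drop=True)
idx_df['label'] = (idx_df['pct_chg'] > 0).astype(int)
# each news day would then be matched to the label of the next trading day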
Full test code:

import numpy as np
import pandas as pd
import jieba

# hand-set labels following the #0/#1 markers above: 1 = index rose the next day, 0 = it fell
df1['label'] = 0
df2['label'] = 1
all_ = pd.concat([df1, df2], ignore_index=True)  # 'all_' avoids shadowing the built-in all()
all_['words'] = all_['content'].apply(lambda s: list(jieba.cut(s)))  # jieba word segmentation

maxlen = 100   # truncate each document to this many tokens
min_count = 1  # drop words appearing fewer times than this -- the simplest dimensionality reduction

content = []
for i in all_['words']:
    content.extend(i)

abc = pd.Series(content).value_counts()
abc = abc[abc >= min_count]
abc[:] = list(range(1, len(abc)+1))
abc[''] = 0  # the empty string is used for padding
word_set = set(abc.index)

def doc2num(s, maxlen):
    s = [i for i in s if i in word_set]
    s = s[:maxlen] + ['']*max(0, maxlen-len(s))
    return list(abc[s])

all_['doc2num'] = all_['words'].apply(lambda s: doc2num(s, maxlen))

# shuffle the rows manually
idx = list(range(len(all_)))
np.random.shuffle(idx)
all_ = all_.iloc[idx].reset_index(drop=True)

from keras.models import Sequential
from keras.layers import Embedding, Dropout, Dense, Bidirectional, LSTM, Flatten, Activation

model = Sequential()
model.add(Embedding(len(abc), 256, input_length=maxlen))
model.add(Dropout(0.5))
model.add(Dense(128))
model.add(Bidirectional(LSTM(128, return_sequences=True), merge_mode='concat'))
# model.add(LSTM(128))  # simpler alternative: a single LSTM layer instead of the BiLSTM
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

batch_size = 128
train_num = 15000  # presumably meant for a train/test split; unused in this test run

x = np.array(list(all_['doc2num']))
y = all_['label'].values
model.fit(x, y, batch_size=batch_size, epochs=10)
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_8 (Embedding)      (None, 100, 256)          598784    
_________________________________________________________________
dropout_12 (Dropout)         (None, 100, 256)          0         
_________________________________________________________________
dense_9 (Dense)              (None, 100, 128)          32896     
_________________________________________________________________
bidirectional_4 (Bidirection (None, 100, 256)          263168    
_________________________________________________________________
dropout_13 (Dropout)         (None, 100, 256)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 25600)             0         
_________________________________________________________________
dense_10 (Dense)             (None, 64)                1638464   
_________________________________________________________________
dense_11 (Dense)             (None, 1)                 65        
_________________________________________________________________
activation_8 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,533,377
Trainable params: 2,533,377
Non-trainable params: 0
_________________________________________________________________
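
Once trained, the model's predicted probability can be used directly as the daily sentiment factor. A sketch of scoring a new day's news follows; new_df is a hypothetical DataFrame holding that day's cctv_news rows and is not part of the original code.

# Sketch: turn the trained model's output into a market-level sentiment score
new_df['words'] = new_df['content'].apply(lambda s: list(jieba.cut(s)))
new_df['doc2num'] = new_df['words'].apply(lambda s: doc2num(s, maxlen))
x_new = np.array(list(new_df['doc2num']))
sentiment_factor = float(model.predict(x_new).mean())  # average over the day's articles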

The network is made relatively deep on purpose so it is easier to extend later; for simple use a plain LSTM layer is enough and the BiLSTM can be dropped (a sketch follows below).
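A stripped-down version might look like the following sketch, reusing the vocabulary abc and maxlen from above (this is an illustration, not an exact replacement model):

from keras.models import Sequential
from keras.layers import Embedding, Dropout, LSTM, Dense

simple_model = Sequential()
simple_model.add(Embedding(len(abc), 256, input_length=maxlen))
simple_model.add(Dropout(0.5))
simple_model.add(LSTM(128))  # a single LSTM replaces the BiLSTM + Flatten stack
simple_model.add(Dense(1, activation='sigmoid'))
simple_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])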

Future improvements: extend the sentiment factor to three values 0, 1, and -1, corresponding to the index move lying between -0.2 and 0.2, above 0.2, and below -0.2 respectively (as sketched below). Word vectors or TF-IDF weighting could also be added on top of the word segmentation.
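A minimal sketch of the three-valued labelling, assuming pct_chg is the index's percentage change and 0.2 is the threshold mentioned above:

def three_way_label(pct_chg, threshold=0.2):
    # 1 = up more than the threshold, -1 = down more than the threshold, 0 = roughly flat
    if pct_chg > threshold:
        return 1
    if pct_chg < -threshold:
        return -1
    return 0

# example: [three_way_label(p) for p in (0.85, -0.05, -1.3)] -> [1, 0, -1]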

Follow-up posts will cover event subject extraction, financial new-word discovery, and other NLP topics.
