Introduction
Generally speaking, within a user's behavior sequence, the behaviors inside a single session are similar to one another, while behaviors across different sessions differ considerably. In this Alibaba paper, sessions are defined by time intervals: the user's click behaviors are sorted by time, and the sequence is cut wherever the gap between two consecutive behaviors exceeds 30 minutes, as shown in the figure below:
As the figure shows, the feeds the user clicks are trousers in the first session, jewelry in the second, and clothes in the third, which matches the session characteristics described above. DSIN builds its user behavior model on exactly this observation.
The DSIN model makes the following contributions:
- It models behavior as homogeneous within a session and heterogeneous across sessions
- It combines self-attention, bias embedding, and a Bi-LSTM, so the model better captures the user's interest preferences across different sessions
Model Overview
Base Model
The Base Model is a fully connected network.
The features consist of three parts: user features, item features, and user behavior features. User features are mainly gender, city, and the like; item features are mainly item id, brand, and so on; user behavior features are the user's click behaviors. Note that in the code, dense features are appended at the very end of the model, which the code section below explains.
The embedding layer encodes sparse features into dense vectors.
The loss function is the negative log-likelihood:

$$L = -\frac{1}{N} \sum_{(x,y) \in \mathcal{D}} \left( y \log p(x) + (1-y) \log(1-p(x)) \right)$$

where $\mathcal{D}$ is the training set of size $N$, $x$ is the model input, $y \in \{0,1\}$ is the click label, and $p(x)$ is the predicted click probability.
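To make the Base Model concrete, here is a minimal Keras sketch, not the paper's code: every feature name and size below is made up for the example.

import tensorflow as tf
from tensorflow.keras import layers

n_items, emb_dim, seq_len = 5000, 8, 10

# Toy inputs: one user feature, the target item, and the click history
user_gender = layers.Input(shape=(1,), name='gender')
target_item = layers.Input(shape=(1,), name='item')
hist_items = layers.Input(shape=(seq_len,), name='hist_items')

item_emb = layers.Embedding(n_items, emb_dim)
gender_emb = layers.Embedding(2, emb_dim)

# Sum-pool the behavior sequence into a single vector
hist_vec = layers.Lambda(lambda x: tf.reduce_sum(x, axis=1))(item_emb(hist_items))

concat = layers.Concatenate()([layers.Flatten()(gender_emb(user_gender)),
                               layers.Flatten()(item_emb(target_item)),
                               hist_vec])

x = layers.Dense(200, activation='sigmoid')(concat)
x = layers.Dense(80, activation='sigmoid')(x)
out = layers.Dense(1, activation='sigmoid')(x)

base_model = tf.keras.Model([user_gender, target_item, hist_items], out)
# Binary cross-entropy is exactly the negative log-likelihood above
base_model.compile('adam', 'binary_crossentropy')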
DSIN Model
The overall architecture of the model is as follows:
Before the fully connected layers, the model has two main parts: on the left, the vector representations of the user features and item features, produced by the embedding layer; on the right, the user behavior processing part, which breaks down into four layers:
(1) Session Division Layer
The user's historical behavior sequence is sorted by time and split wherever the gap between consecutive behaviors exceeds 30 minutes. The paper splits the user behavior sequence $\mathbf{S} \in \mathbb{R}^{N \times d_{model}}$ into $K$ sessions $\mathbf{Q}$; the $k$-th session is $\mathbf{Q}_k = [\mathbf{b}_1; \mathbf{b}_2; \dots; \mathbf{b}_T] \in \mathbb{R}^{T \times d_{model}}$, where $T$ is the session length and $\mathbf{b}_i \in \mathbb{R}^{d_{model}}$ is the $i$-th behavior in the session, so $\mathbf{Q} \in \mathbb{R}^{K \times T \times d_{model}}$.
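As a minimal sketch of this division step, assuming clicks arrive as (item_id, timestamp) pairs already sorted by time (the helper name is ours, not the paper's):

def split_sessions(behaviors, gap_seconds=30 * 60):
    """Split a time-sorted list of (item_id, timestamp) clicks into sessions.

    A new session starts whenever the gap between two consecutive clicks
    exceeds gap_seconds (30 minutes by default).
    """
    sessions, current, last_ts = [], [], None
    for item_id, ts in behaviors:
        if last_ts is not None and ts - last_ts > gap_seconds:
            sessions.append(current)  # close the previous session
            current = []
        current.append(item_id)
        last_ts = ts
    if current:
        sessions.append(current)
    return sessions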
(2) Session Interest Extractor Layer
The paper runs a Transformer over each session, as the code shows:
# Session Interest Extractor Layer
Self_Attention = Transformer(att_embedding_size,
att_head_num,
dropout_rate=0,
use_layer_norm=False,
use_positional_encoding=(not bias_encoding), # bias_encoding=False
seed=seed,
supports_masking=True,
blinding=True)
During the Transformer step, positional encoding is applied, which plays the role of the Bias Embedding described in the paper (when bias_encoding=True, the dedicated BiasEncoding layer is used instead). It is defined as follows:
$\mathbf{BE} \in \mathbb{R}^{K \times T \times d_{model}}$ has the same shape as $\mathbf{Q}$ and is defined element-wise as

$$\mathbf{BE}_{(k,t,c)} = \mathbf{w}^K_k + \mathbf{w}^T_t + \mathbf{w}^C_c$$

where $\mathbf{BE}_{(k,t,c)}$ is the bias added to the $c$-th unit of the embedding of the $t$-th item in the $k$-th session. In other words, besides a bias for each session and for each item within a session, every position of every item's embedding also receives a bias. After adding the bias terms, $\mathbf{Q}$ becomes $\mathbf{Q} = \mathbf{Q} + \mathbf{BE}$.
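A small numpy sketch of how the three bias vectors broadcast over $\mathbf{Q}$ (shapes follow the definition above; the numbers are illustrative):

import numpy as np

K, T, d_model = 5, 10, 32             # sessions, session length, embedding size
Q = np.random.randn(K, T, d_model)    # stacked session embeddings

w_K = np.random.randn(K, 1, 1)        # bias per session
w_T = np.random.randn(1, T, 1)        # bias per position within a session
w_C = np.random.randn(1, 1, d_model)  # bias per unit of the embedding

BE = w_K + w_T + w_C                  # broadcasts to shape (K, T, d_model)
Q = Q + BE                            # bias-encoded input to the Transformer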
The Transformer is then applied, and averaging its output over each session gives the user's interest vector for the $k$-th session: $\mathbf{I}_k = \mathrm{Avg}(\mathbf{I}_k^Q)$, where $\mathbf{I}_k^Q$ denotes the Transformer output for session $k$.
(3) Session Interest Interacting Layer
The interest interacting layer applies a Bi-LSTM over the session interest vectors. Each direction follows the standard (peephole) LSTM updates:

$$\begin{aligned}
\mathbf{i}_t &= \sigma(\mathbf{W}_{xi}\mathbf{I}_t + \mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{W}_{ci}\mathbf{c}_{t-1} + \mathbf{b}_i)\\
\mathbf{f}_t &= \sigma(\mathbf{W}_{xf}\mathbf{I}_t + \mathbf{W}_{hf}\mathbf{h}_{t-1} + \mathbf{W}_{cf}\mathbf{c}_{t-1} + \mathbf{b}_f)\\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \tanh(\mathbf{W}_{xc}\mathbf{I}_t + \mathbf{W}_{hc}\mathbf{h}_{t-1} + \mathbf{b}_c)\\
\mathbf{o}_t &= \sigma(\mathbf{W}_{xo}\mathbf{I}_t + \mathbf{W}_{ho}\mathbf{h}_{t-1} + \mathbf{W}_{co}\mathbf{c}_t + \mathbf{b}_o)\\
\mathbf{h}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t)
\end{aligned}$$

The hidden state at each time step is the element-wise sum of the forward and backward hidden states: $\mathbf{H}_t = \overrightarrow{\mathbf{h}_{ft}} \oplus \overleftarrow{\mathbf{h}_{bt}}$.
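In plain tf.keras, this "sum of forward and backward hidden states" corresponds to a bidirectional LSTM with merge_mode='sum'. The following is a sketch of the idea, not the deepctr BiLSTM layer itself:

import tensorflow as tf

K, units = 5, 64                            # number of sessions, LSTM units
session_interests = tf.keras.Input(shape=(K, units))

# return_sequences=True keeps one hidden state per session;
# merge_mode='sum' adds forward and backward states element-wise.
hidden_states = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(units, return_sequences=True),
    merge_mode='sum')(session_interests)    # shape: (batch, K, units)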
(4) Session Interest Activating Layer
The activation unit uses an attention mechanism to weight each session interest by its relevance to the target item $\mathbf{X}^I$:

$$a_k^I = \frac{\exp(\mathbf{I}_k \mathbf{W}^I \mathbf{X}^I)}{\sum_{j=1}^{K} \exp(\mathbf{I}_j \mathbf{W}^I \mathbf{X}^I)}, \qquad \mathbf{U}^I = \sum_{k=1}^{K} a_k^I \mathbf{I}_k$$

The Bi-LSTM hidden states $\mathbf{H}_t$ are activated in the same way, yielding $\mathbf{U}^H$; both $\mathbf{U}^I$ and $\mathbf{U}^H$ are concatenated into the final DNN input.
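A minimal numpy sketch of this target-aware attention (a bilinear score followed by softmax pooling; W and the shapes are illustrative):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K, d = 5, 16
I = np.random.randn(K, d)       # session interests I_1 .. I_K
x_item = np.random.randn(d)     # embedding of the target item X^I
W = np.random.randn(d, d)       # learned projection W^I

scores = I @ W @ x_item         # one relevance score per session
a = softmax(scores)             # attention weights a_k
U = a @ I                       # weighted sum: the activated interest U^I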
Code
This section walks through the main model structure; the training script is not covered. GitHub repository: https://github.com/shenweichen/DSIN
# coding: utf-8
"""
Author:
Weichen Shen,[email protected]
Reference:
[1] Feng Y, Lv F, Shen W, et al. Deep Session Interest Network for Click-Through Rate Prediction[J]. arXiv preprint arXiv:1905.06482, 2019.(https://arxiv.org/abs/1905.06482)
"""
from collections import OrderedDict
from tensorflow.python.keras.initializers import RandomNormal
from tensorflow.python.keras.layers import (Concatenate, Dense, Embedding,
Flatten, Input)
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.regularizers import l2
from deepctr.input_embedding import (create_singlefeat_inputdict,
get_embedding_vec_list, get_inputs_list)
from deepctr.layers.core import DNN, PredictionLayer
from deepctr.layers.sequence import (AttentionSequencePoolingLayer, BiasEncoding,
BiLSTM, Transformer)
from deepctr.layers.utils import NoMask, concat_fun
from deepctr.utils import check_feature_config_dict
def DSIN(feature_dim_dict, sess_feature_list, embedding_size=8, sess_max_count=5, sess_len_max=10, bias_encoding=False,
att_embedding_size=1, att_head_num=8, dnn_hidden_units=(200, 80), dnn_activation='sigmoid', dnn_dropout=0,
dnn_use_bn=False, l2_reg_dnn=0, l2_reg_embedding=1e-6, init_std=0.0001, seed=1024, task='binary',
):
"""Instantiates the Deep Session Interest Network architecture.
    :param feature_dim_dict: dict, to indicate sparse field (**now only support sparse feature**) like {'sparse':{'field_1':4,'field_2':3,'field_3':2},'dense':[]}
    :param sess_feature_list: list, to indicate session feature sparse field (**now only support sparse feature**), must be a subset of ``feature_dim_dict["sparse"]``
    :param embedding_size: positive integer, sparse feature embedding_size.
    :param sess_max_count: positive int, to indicate the max number of sessions
    :param sess_len_max: positive int, to indicate the max length of each session
    :param bias_encoding: bool. Whether to use bias encoding or positional encoding
    :param att_embedding_size: positive int, the embedding size of each attention head
    :param att_head_num: positive int, the number of attention heads
    :param dnn_hidden_units: list of positive integers or empty list, the layer number and units in each layer of the deep net
    :param dnn_activation: Activation function to use in the deep net
    :param dnn_dropout: float in [0,1), the probability we will drop out a given DNN coordinate.
    :param dnn_use_bn: bool. Whether to use BatchNormalization before activation in the deep net
    :param l2_reg_dnn: float. L2 regularizer strength applied to the DNN
    :param l2_reg_embedding: float. L2 regularizer strength applied to embedding vectors
    :param init_std: float, to use as the initial std of embedding vectors
    :param seed: integer, to use as random seed.
:param task: str, ``"binary"`` for binary logloss or ``"regression"`` for regression loss
:return: A Keras model instance.
"""
check_feature_config_dict(feature_dim_dict)
    if (att_embedding_size * att_head_num != len(sess_feature_list) * embedding_size):
        raise ValueError(
            "len(sess_feature_list) * embedding_size must equal att_embedding_size * att_head_num, got %d * %d != %d * %d" % (
                len(sess_feature_list), embedding_size, att_embedding_size, att_head_num))
sparse_input, dense_input, user_behavior_input_dict, _, user_sess_length = get_input(feature_dim_dict, sess_feature_list, sess_max_count, sess_len_max)
    # Generate embeddings for the User Field and Item Field
sparse_embedding_dict = {feat.name: Embedding(feat.dimension,
embedding_size,
embeddings_initializer=RandomNormal(mean=0.0,
stddev=init_std,
seed=seed),
embeddings_regularizer=l2(l2_reg_embedding),
name='sparse_emb_' + str(i) + '-' + feat.name,
mask_zero=(feat.name in sess_feature_list)) for i, feat in enumerate(feature_dim_dict["sparse"])}
query_emb_list = get_embedding_vec_list(sparse_embedding_dict,
sparse_input,
feature_dim_dict["sparse"],
sess_feature_list,
sess_feature_list)
    # Concatenate the User Field and Item Field embeddings (the attention query)
query_emb = concat_fun(query_emb_list)
deep_input_emb_list = get_embedding_vec_list(sparse_embedding_dict,
sparse_input,
feature_dim_dict["sparse"],
mask_feat_list=sess_feature_list)
deep_input_emb = concat_fun(deep_input_emb_list)
deep_input_emb = Flatten()(NoMask()(deep_input_emb))
    # Session Division Layer
tr_input = sess_interest_division(sparse_embedding_dict,
user_behavior_input_dict,
feature_dim_dict['sparse'],
sess_feature_list,
sess_max_count,
bias_encoding=bias_encoding)
# Session Interest Extractor Layer
Self_Attention = Transformer(att_embedding_size,
att_head_num,
dropout_rate=0,
use_layer_norm=False,
use_positional_encoding=(not bias_encoding), # bias_encoding=False
seed=seed,
supports_masking=True,
blinding=True)
sess_fea = sess_interest_extractor(tr_input,
sess_max_count,
Self_Attention)
# Session Interest Interacting Layer
lstm_outputs = BiLSTM(len(sess_feature_list) * embedding_size,
layers=2,
res_layers=0,
dropout_rate=0.2, )(sess_fea)
# Session Interest Activating Layer
    # Yellow activation unit in the architecture figure: attends session interests to the target item
interest_attention_layer = AttentionSequencePoolingLayer(att_hidden_units=(64, 16),
weight_normalization=True,
supports_masking=False)([query_emb, sess_fea, user_sess_length])
    # Blue activation unit in the architecture figure: attends Bi-LSTM outputs to the target item
lstm_attention_layer = AttentionSequencePoolingLayer(att_hidden_units=(64, 16),
weight_normalization=True)([query_emb, lstm_outputs, user_sess_length])
deep_input_emb = Concatenate()([deep_input_emb, Flatten()(interest_attention_layer), Flatten()(lstm_attention_layer)])
    # If there are dense inputs, concatenate them with the two parts above and feed the result into the DNN
if len(dense_input) > 0:
deep_input_emb = Concatenate()(
[deep_input_emb] + list(dense_input.values()))
output = DNN(dnn_hidden_units,
dnn_activation,
l2_reg_dnn,
dnn_dropout,
dnn_use_bn,
seed)(deep_input_emb)
output = Dense(1, use_bias=False, activation=None)(output)
output = PredictionLayer(task)(output)
sess_input_list = []
# sess_input_length_list = []
for i in range(sess_max_count):
sess_name = "sess_" + str(i)
sess_input_list.extend(get_inputs_list([user_behavior_input_dict[sess_name]]))
# sess_input_length_list.append(user_behavior_length_dict[sess_name])
model_input_list = get_inputs_list([sparse_input, dense_input]) + sess_input_list + [user_sess_length]
model = Model(inputs=model_input_list, outputs=output)
return model
def get_input(feature_dim_dict, seq_feature_list, sess_max_count, seq_max_len):
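    # Build Keras Input placeholders: one per sparse/dense feature, one per
    # behavior feature per session, plus per-session and overall session lengths.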
sparse_input, dense_input = create_singlefeat_inputdict(feature_dim_dict)
user_behavior_input = {}
for idx in range(sess_max_count):
sess_input = OrderedDict()
for i, feat in enumerate(seq_feature_list):
sess_input[feat] = Input(
shape=(seq_max_len,), name='seq_' + str(idx) + str(i) + '-' + feat)
user_behavior_input["sess_" + str(idx)] = sess_input
user_behavior_length = {"sess_" + str(idx): Input(shape=(1,), name='seq_length' + str(idx)) for idx in
range(sess_max_count)}
user_sess_length = Input(shape=(1,), name='sess_length')
return sparse_input, dense_input, user_behavior_input, user_behavior_length, user_sess_length
def sess_interest_division(sparse_embedding_dict, user_behavior_input_dict, sparse_fg_list, sess_feature_list,
sess_max_count,
bias_encoding=True):
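    # For each of the K sessions, look up and concatenate the behavior feature
    # embeddings; optionally add the paper's bias encoding across sessions.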
tr_input = []
for i in range(sess_max_count):
sess_name = "sess_" + str(i)
        keys_emb_list = get_embedding_vec_list(sparse_embedding_dict,
                                               user_behavior_input_dict[sess_name],
                                               sparse_fg_list,
                                               sess_feature_list,
                                               sess_feature_list)
        # equivalently:
        # [sparse_embedding_dict[feat](user_behavior_input_dict[sess_name][feat]) for feat in
        #  sess_feature_list]
keys_emb = concat_fun(keys_emb_list)
tr_input.append(keys_emb)
if bias_encoding:
tr_input = BiasEncoding(sess_max_count)(tr_input)
return tr_input
def sess_interest_extractor(tr_input, sess_max_count, TR):
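    # Run the shared Transformer on each session separately (self-attention over
    # the items in that session), then stack the K session interests along axis 1.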
tr_out = []
for i in range(sess_max_count):
tr_out.append(TR(
[tr_input[i], tr_input[i]]))
sess_fea = concat_fun(tr_out, axis=1)
return sess_fea
- Last updated: 2019-07-31