[Paper Note] Deep Session Interest Network for Click-Through Rate Prediction: A Detailed Walkthrough


Introduction

Generally, within a user's behavior sequence, the behaviors inside one session are similar to each other, while behaviors across different sessions differ markedly. This Alibaba paper defines sessions by time gaps: the user's click behaviors are sorted by time, and a split is made wherever the interval between two consecutive behaviors exceeds 30 minutes, as shown in the figure below:
[Figure: a user behavior sequence split into sessions by 30-minute gaps]
As the figure shows, the items the user clicks in the first session are trousers, in the second jewelry, and in the third clothes, matching the session property described above. DSIN models user behavior based on this observation.
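For intuition, here is a minimal sketch of this 30-minute splitting rule (plain Python; split_sessions is a hypothetical helper, not from the paper or the repo):

from datetime import timedelta

def split_sessions(behaviors, gap=timedelta(minutes=30)):
    """Split a time-sorted list of (timestamp, item) clicks into sessions.

    A new session starts whenever two consecutive clicks are more than
    `gap` apart (30 minutes in the paper).
    """
    sessions, current, prev_ts = [], [], None
    for ts, item in behaviors:
        if prev_ts is not None and ts - prev_ts > gap:
            sessions.append(current)
            current = []
        current.append(item)
        prev_ts = ts
    if current:
        sessions.append(current)
    return sessions
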
Contributions of the DSIN model:

  • Behaviors are homogeneous within a session and heterogeneous across sessions
  • DSIN uses self-attention, bias embedding, and a Bi-LSTM, so the model better captures the user's interest preferences across different sessions

Model Overview

Base Model

The Base Model is a fully connected network.
The features fall into three groups: user features, item features, and user behavior features. User features include gender, city, and so on; item features include item ID, brand, and so on; user behavior features are mainly the user's clicks. Note that in the code, dense features are concatenated in at the very end of the model, as explained in the code section below.
The embedding layer encodes the sparse features into dense vectors.
The loss function is the negative log-likelihood:
$$L=-\frac{1}{N}\sum_{(x,y)\in D}\big(y\log p(x)+(1-y)\log(1-p(x))\big) \tag{1}$$
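
For concreteness, the same loss in NumPy (a sketch; p holds the predicted click probabilities and y the 0/1 labels):

import numpy as np

def neg_log_likelihood(y, p, eps=1e-8):
    """Eq. (1): averaged negative log-likelihood (binary cross-entropy)."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))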

DSIN Model

The overall architecture of the model is shown below:
[Figure: overall DSIN architecture]
Before the fully connected layers, the model has two main parts: on the left, the embedding vectors of the user features and item features; on the right, the processing of the user behaviors, which consists of four layers:

(1) Session Division Layer

The user's historical behavior sequence is sorted by time and split wherever the time gap exceeds 30 minutes. The paper splits the behavior sequence $S$ into $K$ sessions; the $k$-th session is $Q_k=[b_1;b_2;\cdots;b_i;\cdots;b_T]$, where $T$ is the session length and $b_i$ is the $i$-th behavior in the session, with $b_i\in\mathbb{R}^d$ and $Q_k\in\mathbb{R}^{T\times d}$, so that $Q\in\mathbb{R}^{K\times T\times d}$.
[Figure: Session Division Layer]

(2) Session Interest Extractor Layer

The paper runs a Transformer over each session, which can be seen in the code:

# Session Interest Extractor Layer
Self_Attention = Transformer(att_embedding_size,
                             att_head_num,
                             dropout_rate=0,
                             use_layer_norm=False,
                             use_positional_encoding=(not bias_encoding),  # bias_encoding=False by default
                             seed=seed,
                             supports_masking=True,
                             blinding=True)

Inside the Transformer step, a positional encoding is applied; this is where the paper's Bias Embedding comes in (when bias_encoding=True, the BiasEncoding layer replaces the standard positional encoding, hence use_positional_encoding=(not bias_encoding) above). It is defined as:
$$BE_{(k,t,c)}=w^K_k+w^T_t+w^C_c \tag{2}$$
$BE\in\mathbb{R}^{K\times T\times d}$ has the same dimensions as $Q$. $BE_{(k,t,c)}$ is the bias term for the $c$-th position of the embedding vector of the $t$-th item in the $k$-th session; that is, beyond a bias per session and a bias per item within a session, every position of each item's embedding also gets its own bias term. After adding the bias, $Q$ becomes:
$$Q=Q+BE \tag{3}$$
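
A NumPy sketch of Eqs. (2)–(3) under assumed shapes (K sessions of length T, embedding size d; in the real model the three bias vectors are learned parameters, here they are random stand-ins):

import numpy as np

K, T, d = 5, 10, 32
w_K = np.random.randn(K)  # per-session bias w^K
w_T = np.random.randn(T)  # per-position bias w^T
w_C = np.random.randn(d)  # per-dimension bias w^C

# Broadcast into BE[k, t, c] = w^K_k + w^T_t + w^C_c, shape (K, T, d)
BE = w_K[:, None, None] + w_T[None, :, None] + w_C[None, None, :]

Q = np.random.randn(K, T, d)
Q = Q + BE  # Eq. (3)
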
The Transformer is then applied to each session, and its output, averaged, gives the user's interest vector for that session:
$$I_k=\mathrm{Avg}(I^Q_k) \tag{4}$$
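
In NumPy terms, Eq. (4) is just a mean over the position axis of the Transformer output of each session (a sketch; I_k_Q stands for the (T, d) output $I^Q_k$):

import numpy as np

I_k_Q = np.random.randn(10, 32)  # Transformer output for session k: (T, d)
I_k = I_k_Q.mean(axis=0)         # Eq. (4): average pooling over the T positions
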
[Figure: Session Interest Extractor Layer]

(3) Session Interest Interacting Layer

The interest interacting layer uses a Bi-LSTM; the computation proceeds as follows:
$$i_t=\sigma(W_{xi}I_t+W_{hi}h_{t-1}+W_{ci}c_{t-1}+b_i) \tag{5}$$
$$f_t=\sigma(W_{xf}I_t+W_{hf}h_{t-1}+W_{cf}c_{t-1}+b_f) \tag{6}$$
$$c_t=f_t c_{t-1}+i_t\tanh(W_{xc}I_t+W_{hc}h_{t-1}+b_c) \tag{7}$$
$$o_t=\sigma(W_{xo}I_t+W_{ho}h_{t-1}+W_{co}c_{t-1}+b_o) \tag{8}$$
$$h_t=o_t\tanh(c_t) \tag{9}$$
The hidden state at each time step is the sum of the forward and backward hidden states.
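
In standard Keras terms this corresponds to a bidirectional LSTM whose two directions are merged by summation (a sketch; the repo actually uses its own BiLSTM layer, shown in the code below):

from tensorflow.keras.layers import LSTM, Bidirectional

# merge_mode='sum' adds the forward and backward hidden states
# at every time step, as described above.
bi_lstm = Bidirectional(LSTM(64, return_sequences=True), merge_mode='sum')
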
[Figure: Session Interest Interacting Layer]

(4) Session Interest Activating Layer

The activation unit uses an attention mechanism:
$$a^I_k=\frac{\exp(I_k W^I X^I)}{\sum^K_k \exp(I_k W^I X^I)} \tag{10}$$
$$U^I=\sum^K_k a^I_k I_k \tag{11}$$
$$a^H_k=\frac{\exp(H_k W^H X^I)}{\sum^K_k \exp(H_k W^H X^I)} \tag{12}$$
$$U^H=\sum^K_k a^H_k H_k \tag{13}$$
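
A NumPy sketch of Eqs. (10)–(11) under assumed shapes (W stands in for the learned $W^I$ and X for the target-item embedding $X^I$); Eqs. (12)–(13) are the same computation applied to the Bi-LSTM states $H_k$:

import numpy as np

K, d = 5, 32
I = np.random.randn(K, d)  # session interests I_1..I_K
W = np.random.randn(d, d)  # learned projection W^I (random stand-in)
X = np.random.randn(d)     # target item embedding X^I

scores = I @ W @ X                         # I_k W^I X^I for each k
a = np.exp(scores) / np.exp(scores).sum()  # softmax over sessions, Eq. (10)
U_I = a @ I                                # weighted sum of interests, Eq. (11)
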
[Figure: Session Interest Activating Layer]

Code

This section covers the main model structure; the training scripts are not discussed. GitHub: https://github.com/shenweichen/DSIN

# coding: utf-8
"""
Author:
    Weichen Shen,[email protected]
Reference:
    [1] Feng Y, Lv F, Shen W, et al. Deep Session Interest Network for Click-Through Rate Prediction[J]. arXiv preprint arXiv:1905.06482, 2019.(https://arxiv.org/abs/1905.06482)
"""

from collections import OrderedDict

from tensorflow.python.keras.initializers import RandomNormal
from tensorflow.python.keras.layers import (Concatenate, Dense, Embedding,
                                            Flatten, Input)
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.regularizers import l2

from deepctr.input_embedding import (create_singlefeat_inputdict,
                               get_embedding_vec_list, get_inputs_list)
from deepctr.layers.core import DNN, PredictionLayer
from deepctr.layers.sequence import (AttentionSequencePoolingLayer, BiasEncoding,
                               BiLSTM, Transformer)
from deepctr.layers.utils import NoMask, concat_fun
from deepctr.utils import check_feature_config_dict


def DSIN(feature_dim_dict, sess_feature_list, embedding_size=8, sess_max_count=5, sess_len_max=10, bias_encoding=False,
         att_embedding_size=1, att_head_num=8, dnn_hidden_units=(200, 80), dnn_activation='sigmoid', dnn_dropout=0,
         dnn_use_bn=False, l2_reg_dnn=0, l2_reg_embedding=1e-6, init_std=0.0001, seed=1024, task='binary',
         ):
    """Instantiates the Deep Session Interest Network architecture.
    :param feature_dim_dict: dict,to indicate sparse field (**now only support sparse feature**)like {'sparse':{'field_1':4,'field_2':3,'field_3':2},'dense':[]}
    :param sess_feature_list: list,to indicate session feature sparse field (**now only support sparse feature**),must be a subset of ``feature_dim_dict["sparse"]``
    :param embedding_size: positive integer,sparse feature embedding_size.
    :param sess_max_count: positive int, to indicate the max number of sessions
    :param sess_len_max: positive int, to indicate the max length of each session
    :param bias_encoding: bool. Whether to use bias encoding or positional encoding
    :param att_embedding_size: positive int, the embedding size of each attention head
    :param att_head_num: positive int, the number of attention head
    :param dnn_hidden_units: list,list of positive integer or empty list, the layer number and units in each layer of deep net
    :param dnn_activation: Activation function to use in deep net
    :param dnn_dropout: float in [0,1), the probability we will drop out a given DNN coordinate.
    :param dnn_use_bn: bool. Whether to use BatchNormalization before activation in the deep net
    :param l2_reg_dnn: float. L2 regularizer strength applied to DNN
    :param l2_reg_embedding: float. L2 regularizer strength applied to embedding vector
    :param init_std: float,to use as the initialize std of embedding vector
    :param seed: integer ,to use as random seed.
    :param task: str, ``"binary"`` for  binary logloss or  ``"regression"`` for regression loss
    :return: A Keras model instance.
    """
    check_feature_config_dict(feature_dim_dict)

    if att_embedding_size * att_head_num != len(sess_feature_list) * embedding_size:
        raise ValueError(
            "len(sess_feature_list) * embedding_size must equal att_embedding_size * att_head_num, got %d * %d != %d * %d" % (
                len(sess_feature_list), embedding_size, att_embedding_size, att_head_num))

    sparse_input, dense_input, user_behavior_input_dict, _, user_sess_length = get_input(feature_dim_dict, sess_feature_list, sess_max_count, sess_len_max)

    # Build the embedding tables for the User Field and Item Field
    sparse_embedding_dict = {feat.name: Embedding(feat.dimension, 
                                                  embedding_size,
                                                  embeddings_initializer=RandomNormal(mean=0.0,
                                                                                      stddev=init_std, 
                                                                                      seed=seed),
                                                  embeddings_regularizer=l2(l2_reg_embedding),
                                                  name='sparse_emb_' + str(i) + '-' + feat.name,
                                                  mask_zero=(feat.name in sess_feature_list)) for i, feat in enumerate(feature_dim_dict["sparse"])}

    query_emb_list = get_embedding_vec_list(sparse_embedding_dict, 
                                            sparse_input, 
                                            feature_dim_dict["sparse"],
                                            sess_feature_list, 
                                            sess_feature_list)
    # concat the User Field and Item Field embeddings (used as the attention query)
    query_emb = concat_fun(query_emb_list)

    deep_input_emb_list = get_embedding_vec_list(sparse_embedding_dict, 
                                                 sparse_input, 
                                                 feature_dim_dict["sparse"],
                                                 mask_feat_list=sess_feature_list)
    deep_input_emb = concat_fun(deep_input_emb_list)
    deep_input_emb = Flatten()(NoMask()(deep_input_emb))

    # Session Division Layer
    tr_input = sess_interest_division(sparse_embedding_dict,
                                      user_behavior_input_dict,
                                      feature_dim_dict['sparse'],
                                      sess_feature_list,
                                      sess_max_count,
                                      bias_encoding=bias_encoding)

    # Session Interest Extractor Layer
    Self_Attention = Transformer(att_embedding_size,
                                 att_head_num,
                                 dropout_rate=0,
                                 use_layer_norm=False,
                                 use_positional_encoding=(not bias_encoding),  # bias_encoding=False
                                 seed=seed,
                                 supports_masking=True,
                                 blinding=True)

    sess_fea = sess_interest_extractor(tr_input,
                                       sess_max_count,
                                       Self_Attention)

    # Session Interest Interacting Layer
    lstm_outputs = BiLSTM(len(sess_feature_list) * embedding_size,
                          layers=2,
                          res_layers=0,
                          dropout_rate=0.2, )(sess_fea)

    # Session Interest Activating Layer
    # the yellow Activation Unit in the figure (attends over the session interests)
    interest_attention_layer = AttentionSequencePoolingLayer(att_hidden_units=(64, 16),
                                                             weight_normalization=True,
                                                             supports_masking=False)([query_emb, sess_fea, user_sess_length])
    # the blue Activation Unit in the figure (attends over the Bi-LSTM outputs)
    lstm_attention_layer = AttentionSequencePoolingLayer(att_hidden_units=(64, 16),
                                                         weight_normalization=True)([query_emb, lstm_outputs, user_sess_length])

    deep_input_emb = Concatenate()([deep_input_emb, Flatten()(interest_attention_layer), Flatten()(lstm_attention_layer)])

    # If there are dense inputs, concatenate them with the two parts above and feed everything into the DNN
    if len(dense_input) > 0:
        deep_input_emb = Concatenate()(
            [deep_input_emb] + list(dense_input.values()))

    output = DNN(dnn_hidden_units,
                 dnn_activation,
                 l2_reg_dnn,
                 dnn_dropout,
                 dnn_use_bn,
                 seed)(deep_input_emb)

    output = Dense(1, use_bias=False, activation=None)(output)
    output = PredictionLayer(task)(output)

    sess_input_list = []
    # sess_input_length_list = []
    for i in range(sess_max_count):
        sess_name = "sess_" + str(i)
        sess_input_list.extend(get_inputs_list([user_behavior_input_dict[sess_name]]))
        # sess_input_length_list.append(user_behavior_length_dict[sess_name])

    model_input_list = get_inputs_list([sparse_input, dense_input]) + sess_input_list + [user_sess_length]

    model = Model(inputs=model_input_list, outputs=output)

    return model


def get_input(feature_dim_dict, seq_feature_list, sess_max_count, seq_max_len):
    sparse_input, dense_input = create_singlefeat_inputdict(feature_dim_dict)
    user_behavior_input = {}
    for idx in range(sess_max_count):
        sess_input = OrderedDict()
        for i, feat in enumerate(seq_feature_list):
            sess_input[feat] = Input(
                shape=(seq_max_len,), name='seq_' + str(idx) + str(i) + '-' + feat)

        user_behavior_input["sess_" + str(idx)] = sess_input

    user_behavior_length = {"sess_" + str(idx): Input(shape=(1,), name='seq_length' + str(idx)) for idx in
                            range(sess_max_count)}
    user_sess_length = Input(shape=(1,), name='sess_length')

    return sparse_input, dense_input, user_behavior_input, user_behavior_length, user_sess_length


def sess_interest_division(sparse_embedding_dict, user_behavior_input_dict, sparse_fg_list, sess_feature_list,
                           sess_max_count,
                           bias_encoding=True):
    tr_input = []
    for i in range(sess_max_count):
        sess_name = "sess_" + str(i)
        keys_emb_list = get_embedding_vec_list(sparse_embedding_dict,
                                               user_behavior_input_dict[sess_name],
                                               sparse_fg_list,
                                               sess_feature_list,
                                               sess_feature_list)
        keys_emb = concat_fun(keys_emb_list)
        tr_input.append(keys_emb)
    if bias_encoding:
        tr_input = BiasEncoding(sess_max_count)(tr_input)
    return tr_input


def sess_interest_extractor(tr_input, sess_max_count, TR):
    # Self-attention within each session: the same tensor is passed as both
    # queries and keys of the shared Transformer TR.
    tr_out = []
    for i in range(sess_max_count):
        tr_out.append(TR([tr_input[i], tr_input[i]]))
    sess_fea = concat_fun(tr_out, axis=1)
    return sess_fea
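
Finally, a hypothetical invocation sketch (the field names and sizes are made up; feature_dim_dict must hold deepctr feature objects exposing .name and .dimension, as the Embedding loop above assumes). Note the dimension check at the top of DSIN: len(sess_feature_list) * embedding_size must equal att_embedding_size * att_head_num:

# Hypothetical example, not from the repo's documentation:
# sess_feature_list = ['item_id', 'cate_id'] with embedding_size=4
# gives 2 * 4 = 8 = att_embedding_size * att_head_num = 1 * 8.
model = DSIN(feature_dim_dict,
             sess_feature_list=['item_id', 'cate_id'],
             embedding_size=4,
             sess_max_count=5,
             sess_len_max=10,
             att_embedding_size=1,
             att_head_num=8)
model.compile('adam', 'binary_crossentropy')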
    
Last updated: 2019-07-31