1 - Factorization Machines (Steffen Rendle, 2010)

ABSTRACT

In this paper, we introduce Factorization Machines (FM) which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models.

As stated above, FM is a new model class that combines the advantages of SVMs with those of factorization models.

The basic form of a factorization model is $f(x) = q_1(x)\, q_2(x)\, q_3(x) \cdots q_t(x)$.

Like SVMs, FMs are a general predictor working with any real valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. Thus they are able to estimate interactions even in problems with huge sparsity (like recommender systems) where SVMs fail.

When the features are very sparse, as in recommender systems, SVMs are no longer suitable.
To address this, FMs introduce factorized parameters, which are used to learn the interactions between (crossed) features.

We show that the model equation of FMs can be calculated in linear time and thus FMs can be optimized directly. So unlike nonlinear SVMs, a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings.

FMs avoid this drawback of SVM training; the FM model equation can be evaluated in linear time O(n).

On the other hand there are many different factorization models like matrix factorization, parallel factor analysis or specialized models like SVD++, PITF or FPMC. The drawback of these models is that they are not applicable for general prediction tasks but work only with special input data. Furthermore their model equations and optimization algorithms are derived individually for each task.

Two drawbacks of (specialized) factorization models:

  • they restrict the form of the input data, e.g. in a recommender system the input must be tuples of the form (uid, sid, score);
  • the model equation and its optimization method have to be derived individually for each task.

We show that FMs can mimic these models just by specifying the input data (i.e. the feature vectors). This makes FMs easily applicable even for users without expert knowledge in factorization models.

I. INTRODUCTION

Support Vector Machines are one of the most popular predictors in machine learning and data mining. Nevertheless in settings like collaborative filtering, SVMs play no important role and the best models are either direct applications of standard matrix/ tensor factorization models like PARAFAC [1] or specialized models using factorized parameters [2], [3], [4].
In this paper, we show that the only reason why standard SVM predictors are not successful in these tasks is that they cannot learn reliable parameters (‘hyperplanes’) in complex (non-linear) kernel spaces under very sparse data.

Pay particular attention to how the paper shows why non-linear SVMs do not work on sparse data sets.

On the other hand, the drawback of tensor factorization models and even more for specialized factorization models is that
(1) they are not applicable to standard prediction data (e.g. a real valued feature vector $x \in \mathbb{R}^n$), and
(2) specialized models are usually derived individually for a specific task, requiring effort in modelling and in the design of a learning algorithm.

In this paper, we introduce a new predictor, the Factorization Machine (FM), that is a general predictor like SVMs but is also able to estimate reliable parameters under very high sparsity.
The factorization machine models all nested variable interactions (comparable to a polynomial kernel in SVM), but uses a factorized parametrization instead of a dense parametrization like in SVMs.

Polynomial kernel: $K(x, z) = (1 + \gamma\, x^\top z)^d$, with $\gamma > 0$.

A polynomial-kernel SVM uses a dense parametrization of its model parameters, whereas an FM uses a factorized parametrization.

We show that the model equation of FMs can be computed in linear time and that it depends only on a linear number of parameters. This allows direct optimization and storage of model parameters without the need of storing any training data (e.g. support vectors) for prediction. In contrast to this, non-linear SVMs are usually optimized in the dual form and computing a prediction (the model equation) depends on parts of the training data (the support vectors).
We also show that FMs subsume many of the most successful approaches for the task of collaborative filtering including biased MF, SVD++ [2], PITF [3] and FPMC [4].

In total, the advantages of our proposed FM are:
1) FMs allow parameter estimation under very sparse data where SVMs fail.
2) FMs have linear complexity, can be optimized in the primal and do not rely on support vectors like SVMs. We show that FMs scale to large datasets like Netflix with 100 million training instances.
3) FMs are a general predictor that can work with any real valued feature vector. In contrast to this, other state-of-the-art factorization models work only on very restricted input data. We will show that just by defining the feature vectors of the input data, FMs can mimic state-of-the-art models like biased MF, SVD++, PITF or FPMC.

II. PREDICTION UNDER SPARSITY

The most common prediction task is to estimate a function $y: \mathbb{R}^n \to T$ from a real valued feature vector $x \in \mathbb{R}^n$ to a target domain $T$ (e.g. $T = \mathbb{R}$ for regression or $T = \{+1, -1\}$ for classification). In supervised settings, it is assumed that there is a training dataset $D = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots\}$ of examples for the target function $y$ given. We also investigate the ranking task, where the function $y$ with target $T = \mathbb{R}$ can be used to score feature vectors $x$ and sort them according to their score. Scoring functions can be learned with pairwise training data [5], where a feature tuple $(x^{(A)}, x^{(B)}) \in D$ means that $x^{(A)}$ should be ranked higher than $x^{(B)}$. As the pairwise ranking relation is antisymmetric, it is sufficient to use only positive training instances.

In this paper, we deal with problems where $x$ is highly sparse, i.e. almost all of the elements $x_i$ of a vector $x$ are zero. Let $m(x)$ be the number of non-zero elements in the feature vector $x$ and $\bar{m}_D$ be the average number of non-zero elements $m(x)$ over all vectors $x \in D$. Huge sparsity ($\bar{m}_D \ll n$) appears in many real-world data like feature vectors of event transactions (e.g. purchases in recommender systems) or text analysis (e.g. bag of words approach). One reason for huge sparsity is that the underlying problem deals with large categorical variable domains.

Example 1. Assume we have the transaction data of a movie review system. The system records which user $u \in U$ rates a movie (item) $i \in I$ at a certain time $t \in \mathbb{R}$ with a rating $r \in \{1, 2, 3, 4, 5\}$. Let the users $U$ and items $I$ be:

$U = \{\text{Alice (A)}, \text{Bob (B)}, \text{Charlie (C)}, \ldots\}$
$I = \{\text{Titanic (TI)}, \text{Notting Hill (NH)}, \text{Star Wars (SW)}, \text{Star Trek (ST)}, \ldots\}$

Let the observed data $S$ be:
$S = \{(A, TI, 2010\text{-}1, 5),\ (A, NH, 2010\text{-}2, 3),\ (A, SW, 2010\text{-}4, 1),\ (B, SW, 2009\text{-}5, 4),\ (B, ST, 2009\text{-}8, 5),\ (C, TI, 2009\text{-}9, 1),\ (C, SW, 2009\text{-}12, 5)\}$

An example of a prediction task using this data is to estimate a function $\hat{y}$ that predicts the rating behavior of a user for an item at a certain point in time.

Figure 1 shows one example of how feature vectors can be created from $S$ for this task. Here, first there are $|U|$ binary indicator variables (blue) that represent the active user of a transaction – there is always exactly one active user in each transaction $(u, i, t, r) \in S$, e.g. user Alice in the first one ($x^{(1)}_A = 1$). The next $|I|$ binary indicator variables (red) hold the active item – again there is always exactly one active item (e.g. $x^{(1)}_{TI} = 1$). The feature vectors in figure 1 also contain indicator variables (yellow) for all the other movies the user has ever rated. For each user, these variables are normalized such that they sum up to 1; e.g. Alice has rated Titanic, Notting Hill and Star Wars. Additionally, the example contains a variable (green) holding the time in months starting from January 2009. And finally the vector contains information about the last movie (brown) the user has rated before (s)he rated the active one – e.g. for $x^{(2)}$, Alice rated Titanic before she rated Notting Hill. In section V, we show how factorization machines using such feature vectors as input data are related to specialized state-of-the-art factorization models.

Fig. 1: Example of sparse real valued feature vectors x created from the transactions of Example 1.

We will use this example data throughout the paper for illustration. However please note that FMs are general predictors like SVMs and thus are applicable to any real valued feature vectors and are not restricted to recommender systems.
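
To make the encoding concrete, the sketch below assembles feature vectors in the spirit of Figure 1 from the transaction data $S$ of Example 1. It is a minimal illustration, not the paper's code: the index layout, the helper name `build_feature_vector` and the use of dense numpy arrays (instead of a sparse format) are assumptions made for readability.

```python
import numpy as np

# Hypothetical index layout mirroring Figure 1: one-hot user block, one-hot item block,
# normalized "other rated movies" block, time (months since Jan 2009), last rated movie.
users = ["A", "B", "C"]
items = ["TI", "NH", "SW", "ST"]

u_idx = {u: i for i, u in enumerate(users)}
i_idx = {m: len(users) + i for i, m in enumerate(items)}
rated_idx = {m: len(users) + len(items) + i for i, m in enumerate(items)}
time_idx = len(users) + 2 * len(items)
last_idx = {m: time_idx + 1 + i for i, m in enumerate(items)}
n = time_idx + 1 + len(items)

def build_feature_vector(user, item, time, last_movie, rated_by_user):
    """Assemble one row x as in Figure 1 (dense numpy array for clarity)."""
    x = np.zeros(n)
    x[u_idx[user]] = 1.0                            # active user (blue block)
    x[i_idx[item]] = 1.0                            # active item (red block)
    for m in rated_by_user:                         # all movies the user ever rated (yellow block)
        x[rated_idx[m]] = 1.0 / len(rated_by_user)
    x[time_idx] = (time[0] - 2009) * 12 + time[1]   # months since January 2009 (green)
    if last_movie is not None:
        x[last_idx[last_movie]] = 1.0               # last movie rated before the active one (brown)
    return x

# First transaction of Alice: she has rated TI, NH and SW overall, no previous movie.
x1 = build_feature_vector("A", "TI", (2010, 1), None, ["TI", "NH", "SW"])
```

In a real system one would of course store $x$ in a sparse format, since only $m(x) \ll n$ entries are non-zero.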

III. FACTORIZATION MACHINES (FM)

In this section, we introduce factorization machines. We discuss the model equation in detail and briefly show how to apply FMs to several prediction tasks.

A. Factorization Machine Model

1) Model Equation: The model equation for a factorization machine of degree d=2 is defined as:

$$\hat{y}(x) := w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle v_i, v_j \rangle\, x_i x_j \qquad (1)$$

where the model parameters that have to be estimated are:
$$w_0 \in \mathbb{R}, \quad \mathbf{w} \in \mathbb{R}^n, \quad V \in \mathbb{R}^{n \times k}$$

and $\langle \cdot, \cdot \rangle$ is the dot product of two vectors of size $k$: $\langle v_i, v_j \rangle := \sum_{f=1}^{k} v_{i,f}\, v_{j,f}$. A row $v_i$ within $V$ describes the $i$-th variable with $k$ factors. $k \in \mathbb{N}_0^+$ is a hyperparameter that defines the dimensionality of the factorization.

A 2-way FM (degree $d = 2$) captures all single and pairwise interactions between variables:

  • $w_0$ is the global bias.
  • $w_i$ models the strength of the $i$-th variable.
  • $\hat{w}_{i,j} := \langle v_i, v_j \rangle$ models the interaction between the $i$-th and $j$-th variable. Instead of using its own model parameter $w_{i,j}$ for each interaction, the FM models the interaction by factorizing it. We will see later on that this is the key point which allows high quality parameter estimates of higher-order interactions ($d \geq 2$) under sparsity. A direct implementation of eq. (1) is sketched below.
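
As a reference point, here is a direct, quadratic-time implementation of eq. (1). It is a sketch under the obvious conventions: `x` is a dense vector of length n, `w` the linear weights and `V` the n × k factor matrix; the function name is ours, not from the paper.

```python
import numpy as np

def fm_predict_naive(x, w0, w, V):
    """Evaluate eq. (1) literally: global bias, linear terms and every pairwise
    interaction <v_i, v_j> x_i x_j. Runtime is O(k n^2)."""
    y = w0 + w @ x
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            y += np.dot(V[i], V[j]) * x[i] * x[j]
    return y
```

The quadratic cost of this direct evaluation is what motivates the reformulation in section III-A4 below.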

2) Expressiveness: It is well known that for any positive definite matrix $W$, there exists a matrix $V$ such that $W = V \cdot V^t$ provided that $k$ is sufficiently large. This shows that a FM can express any interaction matrix $W$ if $k$ is chosen large enough. Nevertheless in sparse settings, typically a small $k$ should be chosen because there is not enough data to estimate complex interactions $W$. Restricting $k$ – and thus the expressiveness of the FM – leads to better generalization and thus improved interaction matrices under sparsity.
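
A quick numerical illustration of this expressiveness argument (our own sketch, not from the paper): for a positive semi-definite $W$, an exact factorization $V$ with $k = n$ can be read off from the eigendecomposition, while a small $k$ only gives a low-rank approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
W = A @ A.T                                 # a positive semi-definite interaction matrix

lam, Q = np.linalg.eigh(W)                  # W = Q diag(lam) Q^T
V = Q * np.sqrt(np.clip(lam, 0.0, None))    # scale each eigenvector by sqrt(eigenvalue)
print(np.allclose(V @ V.T, W))              # True: with k = n the factorization is exact

V2 = Q[:, -2:] * np.sqrt(lam[-2:])          # keep only the two largest factors (small k)
print(np.linalg.norm(W - V2 @ V2.T))        # non-zero: a small k restricts expressiveness
```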

3) Parameter Estimation Under Sparsity: In sparse settings, there is usually not enough data to estimate interactions between variables directly and independently. Factorization machines can estimate interactions well even in these settings because they break the independence of the interaction parameters by factorizing them. In general this means that the data for one interaction also helps to estimate the parameters for related interactions. We will make the idea more clear with an example from the data in figure 1. Assume we want to estimate the interaction between Alice (A) and Star Trek (ST) for predicting the target $y$ (here the rating). Obviously, there is no case $x$ in the training data where both variables $x_A$ and $x_{ST}$ are non-zero and thus a direct estimate would lead to no interaction ($w_{A,ST} = 0$). But with the factorized interaction parameters $\langle v_A, v_{ST} \rangle$ we can estimate the interaction even in this case. First of all, Bob and Charlie will have similar factor vectors $v_B$ and $v_C$ because both have similar interactions with Star Wars ($v_{SW}$) for predicting ratings – i.e. $\langle v_B, v_{SW} \rangle$ and $\langle v_C, v_{SW} \rangle$ have to be similar. Alice ($v_A$) will have a different factor vector from Charlie ($v_C$) because she has different interactions with the factors of Titanic and Star Wars for predicting ratings. Next, the factor vector of Star Trek is likely to be similar to the one of Star Wars because Bob has similar interactions for both movies for predicting $y$. In total, this means that the dot product (i.e. the interaction) of the factor vectors of Alice and Star Trek will be similar to the one of Alice and Star Wars – which also makes intuitive sense.

4) Computation: Next, we show how to make FMs applicable from a computational point of view. The complexity of a straightforward computation of eq. (1) is $O(k\, n^2)$ because all pairwise interactions have to be computed. But by reformulating it, the complexity drops to linear runtime.

Lemma 3.1: The model equation of a factorization machine (eq. (1)) can be computed in linear time $O(k\, n)$.
Proof: Due to the factorization of the pairwise interactions, there is no model parameter that directly depends on two variables (e.g. a parameter with an index $(i, j)$). So the pairwise interactions can be reformulated:

$$\begin{aligned}
\sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle v_i, v_j \rangle\, x_i x_j
&= \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \langle v_i, v_j \rangle\, x_i x_j - \frac{1}{2}\sum_{i=1}^{n} \langle v_i, v_i \rangle\, x_i x_i \\
&= \frac{1}{2}\left(\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{f=1}^{k} v_{i,f}\, v_{j,f}\, x_i x_j - \sum_{i=1}^{n}\sum_{f=1}^{k} v_{i,f}\, v_{i,f}\, x_i x_i\right) \\
&= \frac{1}{2}\sum_{f=1}^{k}\left(\left(\sum_{i=1}^{n} v_{i,f}\, x_i\right)\left(\sum_{j=1}^{n} v_{j,f}\, x_j\right) - \sum_{i=1}^{n} v_{i,f}^2\, x_i^2\right) \\
&= \frac{1}{2}\sum_{f=1}^{k}\left(\left(\sum_{i=1}^{n} v_{i,f}\, x_i\right)^2 - \sum_{i=1}^{n} v_{i,f}^2\, x_i^2\right)
\end{aligned}$$

This equation has only linear complexity in both $k$ and $n$ – i.e. its computation is in $O(k\, n)$.

Moreover, under sparsity most of the elements in $x$ are 0 (i.e. $m(x)$ is small) and thus the sums only have to be computed over the non-zero elements. Thus in sparse applications, the computation of the factorization machine is in $O(k\, \bar{m}_D)$ – e.g. $\bar{m}_D = 2$ for typical recommender systems like MF approaches (see section V-A).
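
The reformulation of lemma 3.1 translates directly into code. The sketch below (assumptions as before: dense `x`, factor matrix `V` of shape n × k) evaluates the model in O(kn) and checks that it agrees with the quadratic-time `fm_predict_naive` sketched earlier.

```python
import numpy as np

def fm_predict_linear(x, w0, w, V):
    """Evaluate the FM model equation in O(k n) using lemma 3.1:
    sum_{i<j} <v_i,v_j> x_i x_j = 1/2 sum_f [ (sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2 ]."""
    s = V.T @ x                       # per-factor sums: s_f = sum_i v_{i,f} x_i
    s_sq = (V ** 2).T @ (x ** 2)      # per-factor sums of squares
    return w0 + w @ x + 0.5 * np.sum(s ** 2 - s_sq)

# Sanity check against the naive version from the earlier sketch
rng = np.random.default_rng(1)
n, k = 8, 3
x, w0, w, V = rng.normal(size=n), 0.1, rng.normal(size=n), rng.normal(size=(n, k))
print(np.isclose(fm_predict_linear(x, w0, w, V), fm_predict_naive(x, w0, w, V)))  # True
```

For sparse $x$ only the non-zero entries contribute to both sums, which is where the $O(k\, m(x))$ cost comes from.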

B. Factorization Machines as Predictors

FMs can be applied to a variety of prediction tasks. Among them are:

  • Regression: $\hat{y}(x)$ can be used directly as the predictor and the optimization criterion is e.g. the minimal least square error on $D$.
  • Binary classification: the sign of $\hat{y}(x)$ is used and the parameters are optimized for hinge loss or logit loss.
  • Ranking: the vectors $x$ are ordered by the score of $\hat{y}(x)$ and optimization is done over pairs of instance vectors $(x^{(a)}, x^{(b)}) \in D$ with a pairwise classification loss (e.g. like in [5]); one possible pairwise loss is sketched below.

In all these cases, regularization terms like L2 are usually added to the optimization objective to prevent overfitting.
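
For the ranking case, the paper only requires some pairwise classification loss on score differences. A minimal sketch, assuming a logistic (BPR-style) loss on the difference of two FM scores; the function name and the choice of logistic loss are ours:

```python
import numpy as np

def pairwise_ranking_loss(score_a, score_b):
    """Logistic loss on the score difference: penalizes cases where the FM does not
    rank x^(a) above x^(b). Any other pairwise classification loss would work as well."""
    return np.log1p(np.exp(-(score_a - score_b)))
```

In practice an L2 penalty on $\mathbf{w}$ and $V$ would be added to this objective, as noted above.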

C. Learning Factorization Machines

As we have shown, FMs have a closed model equation that can be computed in linear time. Thus, the model parameters ($w_0$, $\mathbf{w}$ and $V$) of FMs can be learned efficiently by gradient descent methods – e.g. stochastic gradient descent (SGD) – for a variety of losses, among them square, logit or hinge loss. The gradient of the FM model is:

$$\frac{\partial}{\partial \theta}\, \hat{y}(x) = \begin{cases} 1, & \text{if } \theta \text{ is } w_0 \\ x_i, & \text{if } \theta \text{ is } w_i \\ x_i \sum_{j=1}^{n} v_{j,f}\, x_j - v_{i,f}\, x_i^2, & \text{if } \theta \text{ is } v_{i,f} \end{cases}$$

The sum $\sum_{j=1}^{n} v_{j,f}\, x_j$ is independent of $i$ and thus can be precomputed (e.g. when computing $\hat{y}(x)$). In general, each gradient can be computed in constant time $O(1)$. And all parameter updates for a case $(x, y)$ can be done in $O(k\, n)$ – or $O(k\, m(x))$ under sparsity.

We provide a generic implementation, LIBFM, that uses SGD and supports both element-wise and pairwise losses.
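
As an illustration of how such SGD training could look (this is our own sketch, not LIBFM itself), the snippet below performs one stochastic gradient step for the squared loss using the gradients given above, reusing the precomputed per-factor sums. The learning rate, regularization strength and the toy data are arbitrary assumptions.

```python
import numpy as np

def fm_sgd_step(x, y, w0, w, V, lr=0.01, reg=0.01):
    """One SGD update of (w0, w, V) for 1/2 squared error, using the FM gradients:
    d y_hat/d w0 = 1, d y_hat/d w_i = x_i, d y_hat/d v_{i,f} = x_i * s_f - v_{i,f} * x_i^2."""
    s = V.T @ x                                        # s_f = sum_j v_{j,f} x_j (precomputed once)
    y_hat = w0 + w @ x + 0.5 * np.sum(s ** 2 - (V ** 2).T @ (x ** 2))
    err = y_hat - y                                    # derivative of 1/2 (y_hat - y)^2 w.r.t. y_hat

    w0 = w0 - lr * err
    w = w - lr * (err * x + reg * w)
    grad_V = err * (np.outer(x, s) - V * (x ** 2)[:, None])
    V = V - lr * (grad_V + reg * V)
    return w0, w, V

# Tiny usage example on random data with a single true interaction (purely illustrative)
rng = np.random.default_rng(2)
n, k = 8, 3
w0, w, V = 0.0, np.zeros(n), 0.01 * rng.normal(size=(n, k))
for _ in range(1000):
    x = rng.normal(size=n)
    w0, w, V = fm_sgd_step(x, x[0] * x[1], w0, w, V)
print(np.dot(V[0], V[1]))                              # learned interaction <v_0, v_1> for the x_0 * x_1 target
```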

D. d-way Factorization Machine

The 2-way FM described so far can easily be generalized to a d-way FM:

$$\hat{y}(x) := w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{l=2}^{d} \sum_{i_1=1}^{n} \cdots \sum_{i_l = i_{l-1}+1}^{n} \left(\prod_{j=1}^{l} x_{i_j}\right) \left(\sum_{f=1}^{k_l} \prod_{j=1}^{l} v^{(l)}_{i_j, f}\right) \qquad (5)$$

where the interaction parameters for the $l$-th interaction are factorized by the PARAFAC model [1] with the model parameters:

$$V^{(l)} \in \mathbb{R}^{n \times k_l}, \quad k_l \in \mathbb{N}_0^+$$

The straightforward complexity for computing eq. (5) is $O(k_d\, n^d)$. But with the same arguments as in lemma 3.1, one can show that it can be computed in linear time.

E. Summary

FMs model all possible interactions between values in the feature vector $x$ using factorized interactions instead of fully parametrized ones. This has two main advantages:

  1. The interactions between values can be estimated even under high sparsity. Especially, it is possible to generalize to unobserved interactions.
  2. The number of parameters as well as the time for prediction and learning is linear. This makes direct optimization using SGD feasible and allows optimizing against a variety of loss functions.

In the remainder of this paper, we will show the relationships between factorization machines and support vector machines as well as matrix, tensor and specialized factorization models.

IV. FMs VS. SVMs

A. SVM model

The model equation of an SVM [6] can be expressed as the dot product between the transformed input $x$ and model parameters $w$: $\hat{y}(x) = \langle \phi(x), w \rangle$, where $\phi$ is a mapping from the feature space $\mathbb{R}^n$ into a more complex space $\mathcal{F}$. The mapping $\phi$ is related to the kernel with:

$$K: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}, \qquad K(x, z) = \langle \phi(x), \phi(z) \rangle$$

In the following, we discuss the relationships of FMs and SVMs by analyzing the primal form of the SVMs.

In practice, SVMs are solved in the dual form and the mapping φ is not performed explicitly. Nevertheless, the primal and dual have the same solution (optimum), so all our arguments about the primal hold also for the dual form.

1) Linear kernel: The simplest kernel is the linear kernel: $K_l(x, z) := 1 + \langle x, z \rangle$, which corresponds to the mapping $\phi(x) := (1, x_1, \ldots, x_n)$. And thus the model equation of a linear SVM can be rewritten as:

$$\hat{y}(x) = w_0 + \sum_{i=1}^{n} w_i x_i, \qquad w_0 \in \mathbb{R},\ \mathbf{w} \in \mathbb{R}^n \qquad (7)$$

It is obvious that a linear SVM (eq. (7)) is identical to a FM of degree d = 1 (eq. (5)).

2) Polynomial kernel: The polynomial kernel allows the SVM to model higher interactions between variables. It is defined as $K(x, z) := (\langle x, z \rangle + 1)^d$. E.g. for $d = 2$ this corresponds to the following mapping:

$$\phi(x) := \left(1,\ \sqrt{2}\,x_1, \ldots, \sqrt{2}\,x_n,\ x_1^2, \ldots, x_n^2,\ \sqrt{2}\,x_1 x_2, \ldots, \sqrt{2}\,x_1 x_n,\ \sqrt{2}\,x_2 x_3, \ldots, \sqrt{2}\,x_{n-1} x_n\right)$$

And so, the model equation for polynomial SVMs can be rewritten as:

$$\hat{y}(x) = w_0 + \sqrt{2}\sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} w^{(2)}_{i,i}\, x_i^2 + \sqrt{2}\sum_{i=1}^{n}\sum_{j=i+1}^{n} w^{(2)}_{i,j}\, x_i x_j \qquad (9)$$

where the model parameters are:

$$w_0 \in \mathbb{R}, \quad \mathbf{w} \in \mathbb{R}^n, \quad W^{(2)} \in \mathbb{R}^{n \times n} \ \text{(symmetric matrix)}$$

Comparing a polynomial SVM (eq. (9)) to a FM (eq. (1)), one can see that both model all nested interactions up to degree $d = 2$. The main difference between SVMs and FMs is the parametrization: all interaction parameters $w_{i,j}$ of SVMs are completely independent, e.g. $w_{i,j}$ and $w_{i,l}$. In contrast to this, the interaction parameters of FMs are factorized and thus $\langle v_i, v_j \rangle$ and $\langle v_i, v_l \rangle$ depend on each other as they overlap and share parameters (here $v_i$).
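
A small numerical check (our own, with an assumed ordering of the mapped features) that the explicit degree-2 mapping above really reproduces the polynomial kernel $K(x, z) = (\langle x, z \rangle + 1)^2$:

```python
import numpy as np
from itertools import combinations

def phi_poly2(x):
    """Explicit feature map of the degree-2 polynomial kernel (one possible ordering)."""
    pairs = [np.sqrt(2.0) * x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate(([1.0], np.sqrt(2.0) * x, x ** 2, np.array(pairs)))

rng = np.random.default_rng(3)
x, z = rng.normal(size=4), rng.normal(size=4)
print(np.isclose(phi_poly2(x) @ phi_poly2(z), (x @ z + 1.0) ** 2))   # True
```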

B. Parameter Estimation Under Sparsity

In the following, we will show why linear and polynomial SVMs fail for very sparse problems. We show this for the example of collaborative filtering with user and item indicator variables (see the first two groups (blue and red) in the example of figure 1). Here, the feature vectors are sparse and only two elements are non-zero (the active user $u$ and active item $i$).

1) Linear SVM: For this kind of data x, the linear SVM model (eq. (7)) is equivalent to:

$$\hat{y}(x) = w_0 + w_u + w_i$$

Because $x_j = 1$ if and only if $j = u$ or $j = i$. This model corresponds to one of the most basic collaborative filtering models where only the user and item biases are captured. As this model is very simple, the parameters can be estimated well even under sparsity. However, the empirical prediction quality typically is low (see figure 2).

Fig. 2

2) Polynomial SVM: With the polynomial kernel, the SVM can capture higher-order interactions (here between users and items). In our sparse case with $m(x) = 2$, the model equation for SVMs is equivalent to:

$$\hat{y}(x) = w_0 + \sqrt{2}(w_u + w_i) + w^{(2)}_{u,u} + w^{(2)}_{i,i} + \sqrt{2}\, w^{(2)}_{u,i}$$

First of all, $w_u$ and $w^{(2)}_{u,u}$ express the same – i.e. one can drop one of them (e.g. $w^{(2)}_{u,u}$). Now the model equation is the same as for the linear case but with an additional user-item interaction $w^{(2)}_{u,i}$. In typical collaborative filtering problems, for each interaction parameter $w^{(2)}_{u,i}$ there is at most one observation $(u, i)$ in the training data, and for cases $(u, i)$ in the test data there are usually no observations at all in the training data. For example in figure 1 there is just one observation for the interaction (Alice, Titanic) and none for the interaction (Alice, Star Trek). That means the maximum margin solution for the interaction parameters $w^{(2)}_{u,i}$ for all test cases $(u, i)$ is 0 (e.g. $w^{(2)}_{A,ST} = 0$). And thus the polynomial SVM can make no use of any 2-way interaction for predicting test examples; so the polynomial SVM only relies on the user and item biases and cannot provide better estimates than a linear SVM.

For SVMs, estimating higher-order interactions is not only an issue in CF but in all scenarios where the data is hugely sparse. Because for a reliable estimate of the parameter $w^{(2)}_{i,j}$ of a pairwise interaction $(i, j)$, there must be 'enough' cases $x \in D$ where $x_i \neq 0 \wedge x_j \neq 0$. As soon as either $x_i = 0$ or $x_j = 0$, the case $x$ cannot be used for estimating the parameter $w^{(2)}_{i,j}$. To summarize, if the data is too sparse, i.e. there are too few or even no cases for $(i, j)$, SVMs are likely to fail.

C. Summary

  1. The dense parametrization of SVMs requires direct observations for the interactions which is often not given in sparse settings. Parameters of FMs can be estimated well even under sparsity (see section III-A3).
  2. FMs can be directly learned in the primal. Non-linear SVMs are usually learned in the dual.
  3. The model equation of FMs is independent of the training data. Prediction with SVMs depends on parts of the training data (the support vectors).

V. FMs VS. OTHER FACTORIZATION MODELS

There is a variety of factorization models, ranging from standard models for m-ary relations over categorical variables (e.g. MF, PARAFAC) to specialized models for specific data and tasks (e.g. SVD++, PITF, FPMC). Next, we show that FMs can mimic many of these models just by using the right input data (e.g. feature vector x ).

A. Matrix and Tensor Factorization

Matrix factorization (MF) is one of the most studied factorization models (e.g. [7], [8], [2]). It factorizes a relationship between two categorical variables (e.g. U and I ). The standard approach to deal with categorical variables is to define binary indicator variables for each level of U and I (e.g. see fig. 1, first (blue) and second (red) group):

To shorten notation, we address elements in $x$ (e.g. $x_j$) and the parameters both by numbers (e.g. $j \in \{1, \ldots, n\}$) and categorical levels (e.g. $j \in (U \cup I)$). That means we implicitly assume a bijective mapping from numbers to categorical levels.

$$n := |U \cup I|, \qquad x_j := \delta(j = i \vee j = u)$$

A FM using this feature vector $x$ is identical to the matrix factorization model because $x_j$ is only non-zero for $u$ and $i$, so all other biases and interactions drop:

$$\hat{y}(x) = w_0 + w_u + w_i + \langle v_u, v_i \rangle$$

With the same argument, one can see that for problems with more than two categorical variables, FMs include a nested parallel factor analysis model (PARAFAC).
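
A short sketch of this reduction (reusing the `fm_predict_linear` helper from section III; the index layout and sizes are arbitrary assumptions): with one-hot user and item blocks, the FM prediction collapses to the biased MF form above.

```python
import numpy as np

n_users, n_items, k = 3, 4, 2
n = n_users + n_items
rng = np.random.default_rng(4)
w0, w, V = 0.5, rng.normal(size=n), rng.normal(size=(n, k))

def mf_style_feature(u, i):
    """One-hot encoding of the (user, item) pair: x_u = 1, x_{|U|+i} = 1, all else 0."""
    x = np.zeros(n)
    x[u] = 1.0
    x[n_users + i] = 1.0
    return x

u, i = 1, 2
x = mf_style_feature(u, i)
fm = fm_predict_linear(x, w0, w, V)                       # FM on the indicator vector
mf = w0 + w[u] + w[n_users + i] + V[u] @ V[n_users + i]   # biased MF: w0 + w_u + w_i + <v_u, v_i>
print(np.isclose(fm, mf))                                 # True
```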

B. SVD++

For the task of rating prediction (i.e. regression), Koren improves the matrix factorization model to the SVD++ model. A FM can mimic this model by using the following input data x (like in the first three groups of figure 1):

$$n := |U \cup I \cup L|, \qquad x_j := \begin{cases} 1, & \text{if } j = i \vee j = u \\ \frac{1}{\sqrt{|N_u|}}, & \text{if } j \in N_u \\ 0, & \text{else} \end{cases}$$

where $N_u$ is the set of all movies the user has ever rated. A FM ($d = 2$) would behave as follows using this data:

To distinguish elements in $N_u$ from elements in $I$, they are transformed with any bijective function $\omega: I \to L$ into a space $L$ with $L \cap I = \emptyset$.

$$\hat{y}(x) = \underbrace{w_0 + w_u + w_i + \langle v_u, v_i \rangle + \frac{1}{\sqrt{|N_u|}} \sum_{l \in N_u} \langle v_i, v_l \rangle}_{\text{SVD++}} + \frac{1}{\sqrt{|N_u|}} \sum_{l \in N_u} \left( w_l + \langle v_u, v_l \rangle \right) + \frac{1}{|N_u|} \sum_{l \in N_u} \sum_{l' \in N_u,\, l' > l} \langle v_l, v_{l'} \rangle$$

where the first part is exactly the same as the SVD++ model. But the FM also contains some additional interactions between the user and the movies in $N_u$, as well as basic effects for the movies in $N_u$ and interactions between pairs of movies in $N_u$.
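
For completeness, a sketch of the corresponding feature construction (our own helper; the index layout and the square-root normalization follow the definition above):

```python
import numpy as np

def svdpp_style_feature(u, i, N_u, n_users, n_items):
    """Indicator blocks for the active user u and item i plus a block over all movies
    in N_u, scaled by 1/sqrt(|N_u|) as in the SVD++-style input data above."""
    x = np.zeros(n_users + 2 * n_items)
    x[u] = 1.0
    x[n_users + i] = 1.0
    for l in N_u:                                   # l indexes movies the user has rated
        x[n_users + n_items + l] = 1.0 / np.sqrt(len(N_u))
    return x

# Alice (user 0) rating Star Trek (item 3), having previously rated items {0, 1, 2}
x = svdpp_style_feature(0, 3, [0, 1, 2], n_users=3, n_items=4)
```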

C. PITF for Tag Recommendation

The problem of tag prediction is defined as ranking tags for a given user and item combination. That means there are three categorical domains involved: users $U$, items $I$ and tags $T$. In the ECML/PKDD Discovery Challenge about tag recommendation, a model based on factorizing pairwise interactions (PITF) has achieved the best score. We will show how a FM can mimic this model. A factorization machine with binary indicator variables for the active user $u$, item $i$ and tag $t$ results in the following model:

$$n := |U \cup I \cup T|, \qquad x_j := \delta(j = i \vee j = u \vee j = t)$$

$$\hat{y}(x) = w_0 + w_u + w_i + w_t + \langle v_u, v_i \rangle + \langle v_u, v_t \rangle + \langle v_i, v_t \rangle$$

As this model is used for ranking between two tags $t_A$, $t_B$ within the same user/item combination $(u, i)$, both the optimization and the prediction always work on differences between scores for the cases $(u, i, t_A)$ and $(u, i, t_B)$. Thus with optimization for pairwise ranking, the FM model is equivalent to:

$$\hat{y}(x) = w_t + \langle v_u, v_t \rangle + \langle v_i, v_t \rangle$$

Now the original PITF model and the FM model with binary indicators (eq. (14)) are almost identical. The only difference is that (i) the FM model has a bias term $w_t$ for $t$ and (ii) the factorization parameters for the tags ($v_t$) between the $(u, t)$ and $(i, t)$ interaction are shared in the FM model but individual in the original PITF model. Besides this theoretical analysis, figure 3 shows empirically that both models also achieve comparable prediction quality for this task.

Fig. 3

D. Factorized Personalized Markov Chains (FPMC)

The FPMC model tries to rank products in an online shop based on the last purchases (at time $t - 1$) of the user $u$.
Again just by feature generation, a factorization machine ($d = 2$) behaves similarly:

$$n := |U \cup I \cup L|, \qquad x_j := \begin{cases} 1, & \text{if } j = i \vee j = u \\ \frac{1}{|B_u^{t-1}|}, & \text{if } j \in B_u^{t-1} \\ 0, & \text{else} \end{cases}$$

where $B_u^t \subseteq L$ is the set ('basket') of all items a user $u$ has purchased at time $t$. Then:

$$\hat{y}(x) = w_0 + w_u + w_i + \langle v_u, v_i \rangle + \frac{1}{|B_u^{t-1}|} \sum_{l \in B_u^{t-1}} \langle v_l, v_i \rangle + \frac{1}{|B_u^{t-1}|} \sum_{l \in B_u^{t-1}} \left( w_l + \langle v_l, v_u \rangle \right) + \frac{1}{|B_u^{t-1}|^2} \sum_{l \in B_u^{t-1}} \sum_{l' \in B_u^{t-1},\, l' > l} \langle v_l, v_{l'} \rangle$$

Like for tag recommendation, this model is used and optimized for ranking (here ranking items $i$) and thus only score differences between $(u, i_A, t)$ and $(u, i_B, t)$ are used in the prediction and optimization criterion. Thus, all additive terms that do not depend on $i$ vanish and the FM model equation is equivalent to:

$$\hat{y}(x) = w_i + \langle v_u, v_i \rangle + \frac{1}{|B_u^{t-1}|} \sum_{l \in B_u^{t-1}} \langle v_l, v_i \rangle$$

Now one can see that the original FPMC model and the FM model are almost identical and differ only in the additional item bias wi and the sharing of factorization parameters of the FM model for the items in both the (u,i) and (i,l) interaction.

E. Summary

  1. Standard factorization models like PARAFAC or MF are not general prediction models like factorization machines. Instead they require that the feature vector is partitioned into m parts and that in each part exactly one element is 1 and the rest are 0.
  2. There are many proposals for specialized factorization models designed for a single task. We have shown that factorization machines can mimic many of the most successful factorization models (including MF, PARAFAC, SVD++, PITF, FPMC) just by feature extraction, which makes FMs easily applicable in practice.

VI. CONCLUSION AND FUTURE WORK

In this paper, we have introduced factorization machines. FMs bring together the generality of SVMs with the benefits of factorization models. In contrast to SVMs, (1) FMs are able to estimate parameters under huge sparsity, (2) the model equation is linear and depends only on the model parameters and thus (3) they can be optimized directly in the primal. The expressiveness of FMs is comparable to the one of polynomial SVMs. In contrast to tensor factorization models like PARAFAC, FMs are a general predictor that can handle any real valued vector. Moreover, simply by using the right indicators in the input feature vector, FMs are identical or very similar to many of the specialized state-of-the-art models that are applicable only for a specific task, among them are biased MF, SVD++, PITF and FPMC.
