NetEase Cloud Classroom: Personalized Recommendation in Practice, and Some Reflections

> **Author /** 韓虹瑩
> **Editor /** Ein

# The Origin of Recommender Systems: The Contest Between People and Information

First, let me talk about recommender systems as I understand them.

If you ask what the definition of a recommender system is, every book and every article says something slightly different. Collaborative filtering has existed since 1992, and over thirty years countless experts have analyzed the origin and significance of personalized recommendation; the world hardly needs one more person's take. Still, when everyone agrees that something is right, we should also work out **why it is right**.

If you ask me what a recommender system is, I will tell you: **it is the precise delivery of information to people**. So why did recommender systems emerge only in this era? The ancients had no need for precise information delivery: carriages, horses and letters were all slow, and the learning of a scholar with "five cartloads of books" amounts to less than a schoolbag's worth of information today. Only now, with too much information and too little time, with dazzling choices on every side, do we need an intelligent system to filter information for us. The recommender system is the bridge between people and information.

Of course, just as Rome was not built in a day, building a bridge on the Internet is also an evolution. At first it was a small wooden bridge: the portal site, which distributed information through category navigation. Then came the stone bridge: the search engine, which let people find information more precisely. Gradually there was simply too much information, and information had to start finding people instead. In this process both the consumers and the producers of information ran into unforeseen difficulties: consumers could no longer find information, and producers could no longer get their information in front of consumers. Where there is a pain point there is a need, and where there is a need there is a product; so the recommender system arrived as a product, both timely and inevitable. In *The Inevitable*, Kevin Kelly calls this trend "filtering":

> Filtering is inevitable, because we are ceaselessly creating new things. And foremost among the new things we will create are new ways to filter information and personalize it, in order to highlight the differences between us.

As for how people and information will get along: the recommender system is not the beginning, and probably will not be the end, but it is the best practice we currently have for dealing with information.

# How a Recommender System Meets Needs

A recommender system should be viewed as a product in its own right. What kind of product? As a product that processes information, it must satisfy the needs of both the supply side and the demand side of information to have any value.

A recommender system therefore has to position itself **in the middle**: C-end users and the product team are both your users, and both sides' needs must be met. You need not only technical solutions but also a view of how to serve both ends better. Users simply need you to find the right information for them, precisely; for the product side, you must dig into what they want to obtain through the recommender system.

For the user side (the **information demand side**), the most pressing need is: help me find exactly the information I need.

For the company side (the **information supply side**), the goals are commercial: attracting users, strengthening user stickiness, raising conversion rates. News, short-video and feed platforms want to lift user activity and extend time spent; e-commerce platforms want to raise purchase conversion.

# A Typical Recommender System Architecture

![](https://static001.geekbang.org/infoq/b8/b8ed2a2c14469f19bc1e6738f1abd3f6.png)

As the figure shows, a complete recommender system consists of a data part and a model part. The data part relies mainly on an offline or online big-data platform; its work includes data collection and ETL, producing the feature data the recommendation model needs.

The model part is the main body. Before serving recommendations it must finish model training; it then processes the input data and produces the final output through various recall and ranking strategies. In a typical industrial recommender system, the model part comprises a recall layer, a filter layer and a ranking layer, plus, where the business requires it, a supplementary strategy and algorithm layer.

1. **The recall layer** uses efficient recall rules, algorithms or simple models to quickly retrieve items the user may be interested in from a massive candidate set.
2. **The filter layer** filters the recalled items according to scenario-specific business requirements.
3. **The ranking layer** uses a ranking model to finely rank the pre-screened candidates.
4. **The supplementary strategy and algorithm layer**, also called the "re-ranking layer", adjusts the list before it is returned to the user, balancing metrics such as diversity, popularity and freshness with supplementary strategies and algorithms, to form the recommendation list the user finally sees.

# Common Recommendation Models: Overview and Comparison

First, a timeline of recommendation algorithms:

![](https://static001.geekbang.org/infoq/a8/a8af856c12a3489b138272d1611c8972.png)

As the figure shows, 2016 was the turning point from traditional machine-learning models to deep-learning models: Microsoft's Deep Crossing, Google's Wide&Deep, and a batch of excellent deep-learning recommendation models such as FNN and PNN appeared in quick succession and gradually became the mainstream. Traditional models still deserve attention, though. First, they are the foundation of deep learning, and many ideas carry straight through: the latent-vector idea of matrix factorization lives on in embeddings, the core idea of FM (feature crossing) continues in deep models, and logistic regression can be viewed as another form of a neuron. Second, these algorithms have low hardware requirements, strong interpretability and easy training, and remain suitable for a great many scenarios.

## Evolution of Machine-Learning Recommendation Models

![](https://static001.geekbang.org/infoq/9d/9d36cd95900908aa943d00ed203eabb8.png)

## Collaborative Filtering

Collaborative filtering is the most widely applied model in the recommendation field; mention recommender systems and people immediately think of collaborative filtering, whether user-based or item-based. In a sense, matrix factorization is also a form of collaborative filtering, while user-based and item-based methods are neighborhood-based collaborative filtering. More broadly still, many machine-learning and classification algorithms can be understood as branches of collaborative filtering: collaborative filtering can be seen as a generalization of the classification problem, and for precisely that reason many models suited to classification can be generalized to collaborative filtering as well.

This section focuses on collaborative filtering in the commonly understood, neighborhood-based sense, which boils down to **computing user-user and item-item similarities**.

### User-Based Collaborative Filtering

When a user needs personalized recommendations, first find other users similar to him (through interests, preferences, behavioral habits and so on), then recommend items those users liked that he does not yet know.

Steps:

- Prepare user vectors; in the user-item matrix, each user in principle gets one vector
- The vector dimension is the number of items; the vectors are sparse, and the values can be as simple as 0 or 1
- Compute pairwise similarities between the user vectors, set a similarity threshold, and keep each user's most similar users
- Generate recommendation results for each user

### Item-Based Collaborative Filtering

A simple scenario for item-based collaborative filtering (ItemCF): when a user needs personalized recommendations, then because he previously bought Jin Yong's 《射鵰英雄傳》, he is recommended 《神鵰俠侶》, since many other users bought both books together.

Steps:

- Build the user-item relation matrix, from purchase behavior, post-purchase ratings, purchase counts and so on
- Compute pairwise item similarities to obtain the item similarity matrix
- Generate recommendations, typically in two forms: ① recommending related items for a given item; ② the "guess you like" block on a personal home page

Similarity can be computed in several ways:

① Cosine similarity measures the size of the angle between user vector i and user vector j. Clearly, the smaller the angle, the larger the cosine similarity and the more similar the two users.

![](https://static001.geekbang.org/infoq/09/090e31336338116eaac8ea58839531fe.png)

② The Pearson correlation coefficient, compared with cosine similarity, corrects each individual rating by the user's mean rating, reducing the effect of user rating bias.

![](https://static001.geekbang.org/infoq/6a/6a93644fa4bf99be4f2c30b78ec7f6e5.png)

## Matrix Factorization

One school classifies matrix factorization under collaborative filtering, as model-based collaborative filtering; another treats it as collaborative filtering's evolution. The distinction matters little: matrix factorization builds on collaborative filtering and adds the concept of latent vectors.

Matrix factorization solves some problems that neighborhood models cannot:

① items are inter-related, so the amount of information does not grow linearly with the vector dimension;

② the matrix is sparse and the computed results are unstable: adding or removing one vector can change the results greatly.

Matrix factorization treats the User matrix and the Item matrix as unknowns, expresses every user's predicted rating for every item through them, and learns the two matrices by minimizing the difference between predicted and actual ratings. In other words, in the figure below only the matrix on the left of the equals sign is known; the User matrix and the Item matrix on the right are unknowns, learned by minimizing the gap between predicted and actual ratings.

![](https://static001.geekbang.org/infoq/15/152ab93e7b728def6b7bfe8c372b5de0.png)

The user-behavior data used in matrix factorization comes in two kinds: explicit and implicit. Explicit data is a user's explicit score for an item, such as movie or product ratings, usually on a 5-point or 10-point scale. Implicit data covers browsing, clicking, purchasing, favoriting, liking, commenting and sharing: the user gives no explicit score, and interest in an item shows in the strength of these behaviors. We currently rely mainly on implicit data.

The objective function learns all user vectors and item vectors by minimizing the residual sum of squares between the predicted rating and the actual rating r_ui.

### Objective Function for Explicit Data

![](https://static001.geekbang.org/infoq/c8/c8d7512ae75c9c1d31a60a4b92793cf0.png)

### Objective Function for Implicit Data

![](https://static001.geekbang.org/infoq/74/740985be251813d326a2d18195f43eee.png)

Solution methods: there is more than one way to factorize the matrix, including singular value decomposition, gradient descent and alternating least squares. Here is a brief example using alternating least squares.

ALS (alternating least squares): fix X and optimize Y, then fix Y and optimize X, repeating this process until X and Y converge.

Here is an example, for explicit data, of fixing Y and optimizing X; fixing X to optimize Y works symmetrically:

- The objective function decomposes into multiple independent objectives, one per user; user u's objective is:

![](https://static001.geekbang.org/infoq/ea/ea283d3a634dc55c12ed192867a319da.png)

- Convert this objective function into matrix form.
- Take the gradient of the objective J with respect to x_u, and set the gradient to zero.

## Logistic Regression → POLY2 → FM → FFM

The idea behind logistic regression is clever: it treats the recommendation problem as a classification problem. Why can we say that?

Through the sigmoid function, logistic regression maps an input feature vector x = (x1, x2, ..., xn) into the interval (0, 1), as shown below:

![](https://static001.geekbang.org/infoq/20/20b1fbc5dc65a10cc184384c372d6435.png)

Logistic regression has many strengths:

- Mathematical support: logistic regression assumes the dependent variable y follows a Bernoulli distribution, which matches our understanding of a CTR model well; by contrast, linear regression assumes y follows a Gaussian distribution, which is inconsistent with CTR estimation as a binary classification problem.
- Strong interpretability: the simple mathematical form of logistic regression also matches human intuition about the estimation process.
- Simple engineering: it is easy to parallelize, simple to train, and cheap to train.

But it also has a drawback:

- Limited expressiveness: it cannot perform feature crossing or feature selection, so some information is lost.

Precisely for this reason, the POLY2, FM and FFM models appeared later; let me go through them in turn.

### POLY2: "Brute-Force" Feature Combination

This model crosses all features pairwise (features x_j1 and x_j2) and assigns every feature combination a weight w_h(j1,j2); in essence it is still a linear model:

![](https://static001.geekbang.org/infoq/df/df814a031fdec4ae521b642983ced676.png)

### FM: Feature Crossing with Latent Vectors

The main difference between FM and POLY2 is that the single weight coefficient w_h(j1,j2) is replaced by the inner product of two vectors (w_j1 · w_j2).

- FM learns a latent weight vector for every feature; when two features cross, the inner product of their latent vectors serves as the weight of the crossed feature.
- Introducing latent vectors echoes the idea of matrix factorization, extended from just user-item to all features.
- It reduces POLY2's n² weight parameters to nk.

![](https://static001.geekbang.org/infoq/ca/caf377a53c13aededa6e34ac11439fea.png)

The advantage of FM's approach is that the latent vectors let it handle data sparsity much better, greatly improving generalization. On the engineering side, FM can likewise be trained with gradient descent, so it loses none of its real-time capability or flexibility.

### FFM: Feature Fields

![](https://static001.geekbang.org/infoq/fb/fb4de5158c5f5a642c79dee28867e70b.png)

FFM differs from FM in that the latent vector w_j1 becomes w_j1,f2. This means each feature corresponds not to a single latent vector but to a group of latent vectors. When feature x_j1 crosses feature x_j2, x_j1 picks, from its own group, the latent vector w_j1,f2 matching the field f2 of feature x_j2 for the cross; likewise, x_j2 crosses using the latent vector matching the field f1 of x_j1.

### A Visual View of the Model Evolution

#### The POLY2 Model

![](https://static001.geekbang.org/infoq/ac/ac81a594d556955f95a484598e1cda85.png)

#### The FM Model

![](https://static001.geekbang.org/infoq/26/26fd0eb247ab71612e8061c880ed415b.png)

#### The FFM Model

![](https://static001.geekbang.org/infoq/36/36f1ccb3a2bf2bbf69976281afcf9174.png)

## Comparison of Traditional Machine-Learning Algorithms

![](https://static001.geekbang.org/infoq/49/49d57784955e6cde36a7f88e89d8934a.png)

# How the Big Players Run Recommender Systems

## Comparing Industry Practices

Here are several fairly typical recommender-system implementations, each belonging to a typical recommendation scenario:

![](https://static001.geekbang.org/infoq/bd/bdb547b93d05f45a874f847f0a40bdd7.png)

## Comparison of Deep-Learning Algorithms

Since several of these companies adopt deep-learning models in places, we also surveyed and compared the characteristics, strengths and weaknesses of those models:

![](https://static001.geekbang.org/infoq/26/262b98fd50f44bbacecdf80dd9d021c6.png)

# Personalized Recommendation in Cloud Classroom

## Feature Engineering

We mainly use user-behavior data, which in recommender systems divides into explicit feedback and implicit feedback. In the Cloud Classroom scenario, a user's course rating is explicit behavior, while purchasing a course, studying, taking notes and so on are all implicit behaviors. For these behaviors we assigned initial scores according to their business importance and generated the initial user-course rating matrix.

![](https://static001.geekbang.org/infoq/60/604978a30b3ee82105356e47aff78f34.png)

The rating matrix, in simplified form:

![](https://static001.geekbang.org/infoq/ba/baf6d9923a97a873fd2b13cba90b5f4d.png)

## Algorithm Selection

In the early days of building the personalized recommender, since we were starting from zero, we deliberately chose not to begin with complex deep-learning algorithms or a rich user profile; we wanted to get an MVP version online quickly, then reflect, optimize and iterate.

For the algorithm, we therefore evaluated three candidate approaches:

- Tag-based matching
- Collaborative filtering based on user/item neighborhoods
- Collaborative filtering based on matrix factorization

So how did we choose?

For option one to produce good results, the key lies in the tag system: only with a sufficiently complete tag system do the recommendations improve. In other words, the quality of the results is predictable, but strongly dependent on how well the tag system is built.

Option two is weak at handling sparse matrices, and learning behavior on Cloud Classroom is not exactly high-frequency; the head effect is also pronounced, whereas we hoped the recommender would mine more hidden possibilities and give exposure to the many courses on the platform that ordinarily get none. Neighborhood-based collaborative filtering is clearly not a great fit. Matrix factorization, on the other hand, strengthens the handling of sparse matrices to a degree, and the latent vectors it introduces can mine more possibilities from user behavior.

We chose an ALS (alternating least squares) matrix-factorization model as the first algorithm to put into practice, using the API provided by Spark MLlib.

When building the ALS model, the following parameters need tuning to achieve the best results:

![](https://static001.geekbang.org/infoq/96/96a1eebb329353f4e94ce0a946a1014b.png)

For each of these parameters we ran several rounds of tuning, using MSE and RMSE as evaluation metrics.

Mean square error (MSE) and root mean square error (RMSE) are often used to measure the quality of a regression model. In general, RMSE reflects well how far a regression model's predictions deviate from the true values. In practice, however, if there are a few severely deviating outliers, then even a very small number of them will make both metrics look bad.

![](https://static001.geekbang.org/infoq/99/9991bfe2637a1d99a414da686ae64337.png)

## Engineering Implementation

For a recommender system that can actually be deployed, the data collection, ETL, feature engineering, recommendation algorithm and web service modules are all indispensable. First, the overall architecture:

![](https://static001.geekbang.org/infoq/4c/4cf00f00b7a4baa58ed4f3b09c4f263b.png)

Next, brief notes on how several of the modules are implemented:

![](https://static001.geekbang.org/infoq/ae/ae57ebf3646548d1a06755f9fc0493a4.png)

# References

1. 《深度學習推薦系統》, 王喆
2. 《推薦系統:原理與實踐》, Charu C. Aggarwal
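The user-based collaborative-filtering steps above (0/1 user vectors, cosine similarity, nearest neighbors, recommending unseen items) can be sketched in a few lines. This is a minimal illustration on an invented toy user-item matrix, not production code:

```python
import math

# Toy user-item interaction matrix: one sparse 0/1 vector per user,
# with as many entries as there are items.
ratings = {
    "u1": [1, 1, 0, 0],
    "u2": [1, 1, 1, 0],
    "u3": [0, 0, 1, 1],
}

def cosine(a, b):
    """Cosine similarity: the smaller the angle, the larger the value."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, k=1):
    """Recommend items liked by the k most similar users but unseen by `user`."""
    others = [(cosine(ratings[user], v), u) for u, v in ratings.items() if u != user]
    others.sort(reverse=True)
    seen = ratings[user]
    scores = [0.0] * len(seen)
    for sim, u in others[:k]:
        for i, r in enumerate(ratings[u]):
            if r and not seen[i]:
                scores[i] += sim * r
    return [i for i, s in sorted(enumerate(scores), key=lambda t: -t[1]) if s > 0]

print(recommend("u1"))  # u2 is the nearest neighbour, so item index 2 is suggested
```

In a real system the similarity computation would be thresholded and precomputed offline, as the article's steps describe; here everything is inline for readability.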
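The Pearson correction mentioned for similarity computation amounts to subtracting each user's mean rating before taking the cosine, which removes per-user rating bias. A small sketch with made-up ratings:

```python
import math

def pearson(a, b):
    """Pearson correlation: cosine similarity on mean-centred rating vectors."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    ca = [x - ma for x in a]
    cb = [y - mb for y in b]
    dot = sum(x * y for x, y in zip(ca, cb))
    na = math.sqrt(sum(x * x for x in ca))
    nb = math.sqrt(sum(y * y for y in cb))
    return dot / (na * nb) if na and nb else 0.0

# A generous rater and a harsh rater with the same relative preferences
# come out as perfectly correlated once each user's bias is removed.
print(pearson([5, 4, 3], [3, 2, 1]))  # close to 1.0
```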
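Matrix factorization as described above learns user and item latent vectors by minimizing squared rating residuals. The article's worked example uses ALS; as a simpler illustration of the same objective, here is a sketch using gradient descent (one of the other solution methods the article lists), with invented data and hyper-parameters:

```python
import random

random.seed(0)

# Observed (user, item, rating) triples; the rest of the matrix is unknown.
observed = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
            (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2   # k latent dimensions
lr, reg = 0.05, 0.01            # learning rate and L2 regularisation

U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted rating = inner product of user and item latent vectors."""
    return sum(U[u][f] * V[i][f] for f in range(k))

# Minimise the squared residual between predicted and actual ratings.
for _ in range(2000):
    for u, i, r in observed:
        err = r - predict(u, i)
        for f in range(k):
            uf, vf = U[u][f], V[i][f]
            U[u][f] += lr * (err * vf - reg * uf)
            V[i][f] += lr * (err * uf - reg * vf)

# Observed entries are now reconstructed closely, and unobserved entries
# get scores implied by the learned latent structure.
print(round(predict(0, 0), 1))
```

An ALS solver would instead alternate closed-form least-squares solves for U with V fixed and vice versa, which is what Spark MLlib implements.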
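The logistic-regression view of recommendation, squashing a weighted feature sum through the sigmoid to obtain a click probability in (0, 1), looks like this in a minimal sketch (the weights and feature values here are made up, not learned):

```python
import math

def sigmoid(z):
    """Map any real-valued score into the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_ctr(weights, features, bias=0.0):
    """Logistic regression: estimated probability that the user clicks."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

weights = [0.8, -0.3, 0.5]   # hypothetical learned weights, one per feature
features = [1.0, 0.0, 1.0]   # one feature vector for a (user, item) pair
p = predict_ctr(weights, features)
print(0.0 < p < 1.0, round(p, 3))
```

Because the output is a probability of a binary event, the Bernoulli assumption the article mentions fits naturally; ranking candidates by this probability turns recommendation into repeated classification.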
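FM's key move, replacing POLY2's free weight per feature pair with the inner product of two latent vectors, can be shown directly. The latent vectors below are invented for illustration; with n features and k latent dimensions, the model stores n·k parameters instead of POLY2's roughly n² pairwise weights:

```python
# Each feature gets a k-dimensional latent vector; the weight of the cross
# (j1, j2) is the inner product <v[j1], v[j2]> rather than a free parameter
# per pair as in POLY2.
k = 2
v = [[0.1, 0.2], [0.3, -0.1], [0.2, 0.4]]   # n = 3 features -> n*k parameters
x = [1.0, 1.0, 0.0]                          # a sparse input feature vector

def fm_second_order(v, x):
    """Sum of inner-product-weighted feature crosses over all pairs."""
    total = 0.0
    n = len(x)
    for j1 in range(n):
        for j2 in range(j1 + 1, n):
            inner = sum(v[j1][f] * v[j2][f] for f in range(k))
            total += inner * x[j1] * x[j2]
    return total

print(fm_second_order(v, x))  # only the (0, 1) cross is active for this x
```

This is also why FM generalizes better on sparse data, as the article notes: a pair of features never seen together still gets a sensible crossing weight from their individually learned latent vectors.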
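The MSE and RMSE metrics used to evaluate the ALS model are straightforward to compute, and a toy example (numbers invented) also shows the outlier sensitivity the article warns about:

```python
import math

def mse(actual, predicted):
    """Mean square error over paired actual/predicted ratings."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error: MSE in the original rating units."""
    return math.sqrt(mse(actual, predicted))

actual    = [4.0, 3.0, 5.0, 2.0]
good_pred = [3.8, 3.1, 4.9, 2.2]
outlier   = [3.8, 3.1, 4.9, 7.0]   # one badly off prediction

print(rmse(actual, good_pred))  # small
print(rmse(actual, outlier))    # dominated by the single outlier
```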