DLP-KDD best paper first author on the design and implementation of the pre-ranking layer in Alibaba's large-scale recommender system

> Hello, InfoQ readers. I am Zhe Wang (王喆), co-chair of DLP-KDD 2021. With the DLP-KDD 2021 call for papers open, we interviewed the first author of last year's best paper, "COLD: Towards the Next Generation of Pre-Ranking System" — Zhe Wang (王哲), an Alibaba algorithm expert who shares my name — to talk about the design and implementation of cutting-edge large-scale pre-ranking systems in industry.

> If you would like to exchange ideas with industry peers at this year's DLP-KDD workshop, you are warmly invited to submit. This year's deadline is May 20, 2021; see https://zhuanlan.zhihu.com/p/364358132 for the details of this year's DLP-KDD.

**1. Last year's DLP-KDD best paper, COLD, is excellent work with a very large impact on the industry. Could you briefly introduce its main ideas?**

COLD is a very typical algorithm-system co-design work. It places no restriction on model structure and can support arbitrarily complex deep models. COLD's network takes the concatenated feature embeddings as input, followed by a 7-layer fully connected network, and includes cross features. The whole system trains and scores in real time, to cope with rapid shifts in the online distribution and to be friendlier to cold-starting new ads. Of course, if the features and model become too complex, compute cost and latency become unacceptable, so on one hand we designed a flexible network architecture that can trade accuracy off against compute, and on the other we applied many engineering optimizations to save compute.

![](https://static001.geekbang.org/wechat/images/94/94032e6ceaa019ff9a890ae3134ce4ac.png)

*The main stages of pre-ranking's evolution in the industry, and COLD's model structure*
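As a rough illustration of the scoring path just described — concatenated per-group feature embeddings feeding a stack of fully connected layers — here is a minimal sketch in plain Python. All names, dimensions, and weights are invented for the example; the real COLD model is trained on production data, uses 7 layers, and runs on optimized kernels.

```python
import random

random.seed(0)

EMB_DIM = 4  # per-feature embedding size (illustrative)
FEATURE_GROUPS = ["user_id", "ad_id", "user_x_ad"]  # incl. a cross feature

# Hypothetical embedding tables: feature value -> dense vector.
emb_tables = {
    g: {v: [random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)] for v in range(10)}
    for g in FEATURE_GROUPS
}

def relu(x):
    return [max(0.0, v) for v in x]

def linear(x, w, b):
    # w: out_dim x in_dim weight matrix, b: out_dim bias
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def make_layer(in_dim, out_dim):
    w = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)] for _ in range(out_dim)]
    return w, [0.0] * out_dim

# A stack of fully connected layers over the concatenated embeddings
# (the paper uses 7 layers; depth and widths here are arbitrary).
dims = [len(FEATURE_GROUPS) * EMB_DIM, 8, 8, 1]
layers = [make_layer(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]

def cold_score(feature_values):
    x = []
    for g in FEATURE_GROUPS:  # concatenate the per-group embeddings
        x.extend(emb_tables[g][feature_values[g]])
    for i, (w, b) in enumerate(layers):
        x = linear(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x[0]  # raw logit for one (user, ad) pair

print(cold_score({"user_id": 3, "ad_id": 7, "user_x_ad": 5}))
```

The point of the sketch is the input side: because the network sees a single concatenated vector that may include cross features, there is no structural requirement (unlike a two-tower model) that user and ad information stay separable.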
**2. Could you briefly describe the main characteristics of recall, pre-ranking, and ranking in the deep learning era, in the Alibaba setting?**

![](https://static001.geekbang.org/wechat/images/a2/a2addd39c94dbd3454fe36d7ebd2f216.png)

*The recall, pre-ranking, ranking, and re-ranking layers*

Let me use Alimama's targeted display advertising as an example.

A key characteristic of the recall stage is its scale: our recall ad corpus is on the order of ten million. Another is that recall aims to select a candidate set that serves the needs of the downstream pipeline, so it is dominated by multi-channel recall, selecting sets in several different ways. Scores of ads from different channels are often not comparable.

In recall we currently use TDM, a tree-based full-corpus retrieval algorithm, approximate nearest-neighbor (vector) retrieval, and similar-ad recall triggered by user behavior, among others.

The pre-ranking stage selects hundreds of ads out of tens of thousands to send to ranking, under a real-time constraint of ten-odd milliseconds; besides ranking logic it also contains some ad-filtering logic. Pre-ranking is a transitional module connecting the stages on either side: like recall it performs set selection and can use multiple channels (though usually fewer than recall), and it must also provide unified scoring so the multi-channel recall sets can be merged consistently. Pre-ranking's ceiling is the ranking model, so its remaining iteration headroom can be estimated with suitable techniques.

The ranking stage is centered on click-through-rate prediction with very complex deep models; it concentrates most of the compute budget and usually has higher latency. Besides multi-objective prediction models for CTR, add-to-favorites/add-to-cart rate, conversion rate, and so on, there is a downstream bid-adjustment module that tunes prices against advertisers' goals to balance advertiser and platform revenue, along with business logic such as diversity scattering.

If we compare the whole cascade ranking system to a train, ranking is the locomotive: it is the ceiling of overall system performance and the main battlefield where heavy forces are deployed.

**3. Using embeddings plus simple operations (inner products, small networks) for fast recall / pre-ranking is a mature approach with very efficient online inference. Can it replace COLD? What are COLD's main advantages over it?**

![](https://static001.geekbang.org/wechat/images/9d/9d9dc55ea7df054a89a366c19b832e12.png)

*Typical structure of a two-tower model*

Although the embedding-plus-simple-operations approach consumes little compute and RT (response time), it cannot fully replace COLD in scenarios with high demands on model accuracy and freshness. COLD places no restriction on model structure, so it can use cross features and more complex networks, giving it stronger fitting capacity, while still allowing a flexible trade-off between compute and accuracy. COLD's real-time-training, real-time-scoring architecture also adapts better to rapid shifts in the data distribution, supports fast iteration, and is friendlier to cold start.
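For contrast with COLD, the two-tower serving pattern discussed above reduces online scoring to a dot product: item-tower outputs are precomputed offline and indexed, and serving only needs a (typically approximate) maximum-inner-product search. Here is a toy sketch with made-up random vectors, using exact search in place of an ANN index:

```python
import math
import random

random.seed(1)
DIM = 8

def rand_vec(d):
    return [random.gauss(0, 1) for _ in range(d)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

# In a real system these vectors come from the trained user/ad towers;
# the ad-side vectors are computed offline and stored in an ANN index.
ad_vectors = {ad_id: normalize(rand_vec(DIM)) for ad_id in range(1000)}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top_k(user_vec, k=5):
    # Online serving is just a maximum-inner-product search
    # (exact and brute-force here; approximate in production).
    ranked = sorted(ad_vectors, key=lambda a: dot(user_vec, ad_vectors[a]),
                    reverse=True)
    return ranked[:k]

user_vec = normalize(rand_vec(DIM))
print(top_k(user_vec))
```

This is exactly the structural restriction the answer refers to: because user and ad interact only through the final dot product, no cross features or interaction layers are possible — which is what COLD gives up the dot-product shortcut to regain.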
**4. An industry trend now is to fuse the layers — pre-ranking with ranking, even recall with ranking. Do you think this trend will continue, and what are the difficulties?**

As the saying goes, what is long divided must unite, and what is long united must divide. The recall - pre-ranking - ranking cascade was a compromise made back when compute and latency budgets were insufficient, and there is indeed a trend toward fusing the layers. In recall, for example, we are experimenting with ranking-style direct scoring of the full corpus to break through the bottleneck of approximate retrieval. In pre-ranking we are exploring joint training of pre-ranking and ranking, where a single training run produces several models of different structures and the pre-ranking model is simply one of the more simplified ones. One driver of this trend is the algorithmic breakthroughs of the deep learning era, which make structural unification across the cascade possible.

The other driver is the compute dividend released by GPU / TPU / NPU hardware. As each module matures, single-module iteration gets harder and harder, so improving the system from the perspective of the whole cascade and fusing its modules is a trend that will continue. Technically, one difficulty is sample selection bias: the inference space of the upstream modules (recall / pre-ranking) differs substantially from their training space, which hurts the effectiveness of fusion. Another is controlling the growth of compute and RT as the scoring scale keeps increasing. How the modules should best interact with each other also deserves further research.
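The cascade that this fusion trend pushes against can be pictured as a funnel: each stage applies a progressively more expensive scorer to a shrinking candidate set. The sketch below uses arbitrary stand-in scoring functions and stage sizes purely to show the mechanics:

```python
# Cascade funnel sketch. The three "scorers" are meaningless toy functions of
# increasing cost; in reality they would be a retrieval model, a pre-ranking
# model, and a full ranking model, and the corpus is orders of magnitude larger.
ads = list(range(100_000))

def cheap_score(ad):          # recall-stage stand-in
    return (ad * 2654435761) % 1000

def mid_score(ad):            # pre-ranking stand-in
    return cheap_score(ad) + (ad % 97)

def rich_score(ad):           # ranking stand-in
    return mid_score(ad) + (ad % 11) * 3

def stage(candidates, score_fn, k):
    # Score every candidate, keep the top k for the next stage.
    return sorted(candidates, key=score_fn, reverse=True)[:k]

recall  = stage(ads,     cheap_score, 10_000)
prerank = stage(recall,  mid_score,      300)
rank    = stage(prerank, rich_score,      10)
print(len(recall), len(prerank), len(rank))  # → 10000 300 10
```

The sample-selection-bias difficulty mentioned above falls directly out of this structure: each downstream model only ever sees candidates that survived the stage before it, while at inference time the upstream stages must score a far broader set.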
**5. How would you compare COLD with knowledge distillation?**

COLD is a new real-time pre-ranking architecture, while knowledge distillation is a technique for improving model quality; the two are not in conflict. On top of the current COLD pre-ranking model, we are now experimenting with joint training with the ranking model together with knowledge distillation, and have obtained further online gains.

![](https://static001.geekbang.org/wechat/images/c3/c39da006bb859c8f0a5e326d222dee1b.png)

*Shrinking a model with knowledge distillation*

**6. COLD uses online learning. Could you describe the specific methods, and on what platform (TensorFlow? Flink? inside the server?) the training runs?**

COLD's online learning runs on a real-time data stream from Xingyun (星雲), Alimama's self-developed ODL system, whose underlying engine is Blink (Alibaba's internal product created by improving the Flink project). Training uses XDL, Alimama's in-house deep learning framework. The model is currently trained in real time and updated hourly (hourly is sufficient for our business scenarios; the system can support higher update frequencies).

![](https://static001.geekbang.org/wechat/images/aa/aa509da7dc27b57f59a98eecfcccf0b2.png)

*Blink, Alibaba's self-developed data stream platform built on Flink*

**7. Model serving has always been a hard problem in the industry. How is the COLD model deployed online?**

The online pre-ranking scoring system has two main parts: feature computation and network computation. Feature computation pulls user and ad features from the index and computes the cross features. Network computation turns features into embedding vectors, concatenates them, and runs the network.

To deploy COLD online, we made many engineering optimizations, including:

Parallelization: parallel computation is essential for low latency and high throughput. Since pre-ranking scores different ads independently, the computation can be split into multiple parallel requests whose results are merged at the end. Feature computation is further accelerated with multi-threading, and network computation runs on GPUs as well as dedicated NPU hardware developed in cooperation with the DAMO Academy.

Row-to-column transformation: feature computation can be viewed abstractly as operations on two sparse matrices, a user matrix and an ad matrix. The rows are the batch dimension (batch size 1 for the user matrix, the number of ads for the ad matrix) and the columns are the feature groups. The conventional approach computes each ad's features group by group; this matches the usual way of thinking and makes combined features easy to implement, but its memory accesses are non-contiguous and it suffers from redundant traversals and lookups. In fact, since every feature in the same feature group is computed in the same way, we can exploit this to restructure row-wise computation into column-wise computation, storing the sparse data of each column contiguously, then use MKL to optimize single-feature operators and SIMD (Single Instruction, Multiple Data) to accelerate the combined-feature operators.
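The row-to-column idea can be shown in miniature: instead of walking each ad record and applying every feature-group transform (row-major), store each feature group as one contiguous column and run its transform over the whole column at once, which is the memory layout MKL/SIMD kernels favor. The feature names and transforms below are made up; the point is only that the two layouts produce identical results:

```python
# Row-major input: one record per ad; iterating ads then feature groups
# gives poor memory locality for per-group kernels.
ads_rows = [
    {"ad_id": 101, "brand": 7, "price_bucket": 3},
    {"ad_id": 102, "brand": 7, "price_bucket": 5},
    {"ad_id": 103, "brand": 2, "price_bucket": 3},
]

def row_major(rows, fns):
    # Conventional layout: per ad, apply each feature group's transform.
    return [[fns[g](r[g]) for g in fns] for r in rows]

def to_columns(rows):
    # One contiguous array per feature group.
    return {g: [r[g] for r in rows] for g in rows[0]}

def col_major(cols, fns):
    # Column layout: each group's transform sweeps its whole column at once
    # (the loop a vectorized kernel would replace), then rows are reassembled.
    out_cols = {g: [fns[g](v) for v in cols[g]] for g in fns}
    n = len(next(iter(out_cols.values())))
    return [[out_cols[g][i] for g in fns] for i in range(n)]

# Arbitrary stand-in transforms, one per feature group.
fns = {"ad_id": lambda v: v % 10,
       "brand": lambda v: v * 2,
       "price_bucket": lambda v: v + 1}

assert row_major(ads_rows, fns) == col_major(to_columns(ads_rows), fns)
```

In pure Python both layouts cost the same; the win appears when each column sweep becomes a single vectorized MKL or SIMD call over contiguous memory instead of scattered per-record lookups.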
![](https://static001.geekbang.org/wechat/images/29/29f4d9ed12c8b472b039eda069424249.png)

Float16 acceleration: for COLD, the vast majority of network computation is matrix multiplication, and NVIDIA's Turing architecture provides extra acceleration for Float16 matrix multiplication, so we converted the pre-ranking model to Float16. With Float16, CUDA kernel execution became markedly faster, and kernel launch time became the new bottleneck; to address this we used MPS (Multi-Process Service) to reduce kernel launch overhead. Together, Float16 and MPS bring close to a 2x improvement in QPS.

**8. Algorithm-system co-design is drawing more and more emphasis, and COLD must be a very successful case of it. Could you share your experience — how do the different model and engineering teams and members cooperate well?**

Before COLD, pre-ranking and ranking had two independent stacks for model training and online scoring, which were inconvenient to iterate and maintain. COLD decoupled the ranking engine from the online scoring module and unified pre-ranking and ranking under one ODL training pipeline and one online scoring system, which both lowered maintenance cost and made it easier for the engineering team to optimize scoring performance specifically. At the same time, COLD's architecture can flexibly balance compute against accuracy, no longer constrained to a two-tower structure.

As for collaboration, our team recognized the importance of algorithm-system co-design early on and set up a dedicated efficiency team as the bridge between algorithms and engineering, taking compute optimization and iteration efficiency as its entry points; it has played a very important role. The algorithm, efficiency, and engineering teams cooperate with different emphases. The algorithm team owns model iteration, maintenance, and continuous accuracy improvements. The efficiency team gets involved early in each algorithm iteration to balance compute against the model, and also improves the iteration pipeline (feature processing / ODL streams / training framework, etc.) to raise iteration velocity. The engineering team focuses on the online engine and scoring module, on both performance optimization and feature support. This collaboration model is what ensured COLD's successful launch.
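The precision side of the Float16 conversion can be demonstrated with the standard library alone: round-tripping weights through IEEE 754 half precision keeps roughly 3 decimal digits, which is typically tolerable for inference while halving memory traffic. (The actual speedup comes from Tensor-Core-accelerated GPU kernels plus MPS, which this sketch does not touch.)

```python
import struct

def to_f16(x):
    # Round-trip a Python float through IEEE 754 half precision,
    # the format the Float16 GPU kernels operate on.
    return struct.unpack('e', struct.pack('e', x))[0]

weights = [0.12345678, -3.14159265, 1e-4]   # made-up model weights
w16 = [to_f16(v) for v in weights]
err = max(abs(a - b) for a, b in zip(weights, w16))
print(w16, err)  # rounding error stays well below 1e-2 for these magnitudes
```

Half precision has an 11-bit significand, so the relative rounding error is at most about 2^-11 ≈ 5e-4; for well-scaled network weights that is usually far below the noise floor of the model itself.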
**9. Personally, have you ever hit a bottleneck in model or system improvement, and how did you break out of it? Standing in 2021, where do you predict the future dividends of algorithm development lie?**

I have; let me again use COLD as the example. After COLD went online, further pre-ranking iteration hit a bottleneck: many model optimizations failed to yield additional gains. My way out was to step outside the single pre-ranking module and rethink the problem from the perspective of the whole ranking pipeline. Online and offline analysis showed that pre-ranking was already highly aligned with ranking, leaving limited room for further iteration, so the entry point had to be upstream in recall: increasing the number and quality of the ads recall sends into pre-ranking, to reopen pre-ranking's iteration space. Starting from the perspective of whole-pipeline objective alignment, I made some technical innovations in recall and built recall channels aligned with downstream objectives, which opened up iteration space for both recall and pre-ranking. I then returned to pre-ranking: on one hand continuing to upgrade it from the same whole-pipeline alignment perspective, and on the other transferring the techniques accumulated in recall around the sample selection bias problem over to pre-ranking, with very good online results.

As for the future dividends of algorithm development, that is a big and difficult question, but let me venture a few personal views. The following directions may hold some dividends:

1. On-device intelligence: users' mobile devices are growing ever more powerful — a vast pool of compute waiting to be tapped. The device also has fresher and richer user behavior signals, which can greatly improve prediction accuracy. How the device and the server can better cooperate is a question well worth studying.

![](https://static001.geekbang.org/wechat/images/f1/f199d68bb29cfb19196ca70761f23ea2.png)

*EdgeRec, a classic case of on-device intelligence / edge computing*

2. Ranking architecture upgrades: in today's cascade ranking architecture the modules generally iterate independently, and while aligning toward the final objective, differences between modules easily cause losses along the pipeline that hurt the end result. How to optimize for true whole-pipeline objective alignment, and how to move beyond the cascade toward a better ranking architecture, are well worth exploring.

3. Globally optimal allocation of compute: optimization of compute has mostly focused on single points. If compute is truly treated as a variable and allocated globally and optimally across the whole system pipeline, a further portion of compute headroom may be released. (See the paper "DCAF: A Dynamic Computation Allocation Framework for Online Serving System".)
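To give a flavor of what global compute allocation means, here is a toy version of the idea (not DCAF's actual algorithm): each request can be served at one of several cost/value levels — say, a larger candidate set or a bigger model — and under a global cost budget we greedily buy the upgrade with the best marginal value per unit cost. The request names, costs, and values are all invented:

```python
def allocate(requests, budget):
    """requests: {req_id: [(cost, expected_value), ...]} sorted by cost.
    Greedily upgrade whichever request offers the best marginal
    value-per-cost, until the global budget is exhausted."""
    chosen = {r: 0 for r in requests}            # everyone starts cheapest
    spent = sum(requests[r][0][0] for r in requests)
    while True:
        best = None
        for r, levels in requests.items():
            i = chosen[r]
            if i + 1 < len(levels):
                dc = levels[i + 1][0] - levels[i][0]   # marginal cost
                dv = levels[i + 1][1] - levels[i][1]   # marginal value
                if spent + dc <= budget and dv > 0:
                    ratio = dv / dc
                    if best is None or ratio > best[0]:
                        best = (ratio, r, dc)
        if best is None:
            return chosen
        _, r, dc = best
        chosen[r] += 1
        spent += dc

reqs = {
    "high_value_user": [(1, 1.0), (2, 2.5), (4, 3.0)],
    "low_value_user":  [(1, 0.5), (2, 0.8), (4, 0.9)],
}
print(allocate(reqs, budget=5))  # → {'high_value_user': 1, 'low_value_user': 1}
```

The contrast with single-point optimization is that the budget is shared: spending less on a low-value request frees compute for a high-value one, which is the headroom the answer refers to.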
**10. A few words for people just entering the field — what successes and failures can you share with them?**

An algorithm engineer's core competitiveness lies not only in understanding models; more crucial is a deep understanding of the business and the ability to help it solve real problems. Sometimes, stepping back from the details of models to take a global view not only helps solve the problem at hand but also reveals new opportunities. Just as after COLD, when I faced the question of how pre-ranking should iterate next: with pre-ranking already highly aligned with ranking, continuing to push on the pre-ranking model itself would have been very difficult, but stepping outside pre-ranking to look at the whole pipeline revealed a way through.

**About the expert:**

Zhe Wang (王哲), internal alias Gongdu (公渡), received a master's degree in computer science from the University of Science and Technology of China in 2017. He previously worked at Ant Financial on recommendation and marketing algorithms for cross-border travel. He is now an algorithm expert on the Alimama display advertising team, responsible for pre-ranking and whole-pipeline coordination, and has published several top-conference papers.

**About the DLP-KDD workshop:**

On the occasion of the top-tier conference KDD, senior peers from industry and academia — Alibaba / Microsoft / Huawei / Roku, and Shanghai Jiao Tong University / the University of Utah, among others — are jointly holding the 3rd International Workshop on Deep Learning Practice for High-Dimensional Sparse Data with KDD 2021 (DLP-KDD 2021). We sincerely invite contributions from both academia and industry.

The DLP-KDD 2021 submission deadline is May 20, 2021; see https://zhuanlan.zhihu.com/p/364358132 for detailed submission information.