Continuous Integration and Deployment Practices for Uber's Machine Learning Online Serving and Models

## Foreword

Over the past few years, machine learning applications have grown significantly across Uber's organizations and use cases. Our machine learning models deliver better real-time experiences to users, help prevent safety incidents, and keep the marketplace efficient.

[Figure 1: High-level view of CI/CD for models and the serving binary (image: https://static001.infoq.cn/resource/image/4y/02/4yyc78c08e66cb6772ca2e0ea8a5c202.jpg)]

Note that we run continuous integration (CI) and continuous deployment (CD) for both models and the serving service, as shown above. Because the number of models trained and deployed grew so quickly, it took many iterations before we arrived at solutions to our [MLOps](https://en.wikipedia.org/wiki/MLOps) challenges.

Specifically, there were four major challenges. The first was supporting a large number of model deployments every day while keeping the real-time prediction service highly available. We discuss our solution in the Model Deployment section.

The second challenge was that deploying newly retrained models increased the memory footprint of real-time prediction service instances. Many models also added to the time an instance needs to download and load models on (re)start. We found that a large share of older models received no traffic once new models went live. We discuss our solution in the Model Auto-Retirement section.

The third challenge concerned model rollout strategies. Machine learning engineers roll out models through different stages, such as shadowing, testing, or experimentation. We noticed common patterns in these rollout strategies and decided to build them into the real-time prediction service. We discuss this in the Auto-Shadow section.

Because we manage a fleet of real-time prediction services, manually deploying the serving software is not an option. The final challenge was building a CI/CD story for the real-time prediction service software itself. During model deployment, the model deployment service validates a candidate model by issuing prediction calls against sample data, but it does not re-check models already deployed to the real-time prediction service. Even if a model passes validation, there is no guarantee that it can still be loaded, or that it behaves the same way (for feature transformation and model evaluation), once a new production real-time prediction service instance is deployed.

This can happen because dependencies may change between two versions of the real-time prediction service, or the service build scripts may change. We discuss our solution in the Continuous Integration and Deployment section.

## Model Deployment

To manage the models running in the real-time prediction service, machine learning engineers use the model deployment API to deploy new models and retire unused ones. The API also lets them track a model's deployment progress and runtime health. Figure 2 shows the internal architecture of the system:

[Figure 2: Model deployment workflow and health-check workflow (image: https://static001.infoq.cn/resource/image/74/30/74d917afbf2ea62aa046fdc1d6357430.jpg)]
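To make the deployment API surface concrete, here is a toy in-memory stand-in. All names (`ModelDeploymentService`, `deploy`, `status`, `retire`, `DeployState`) are hypothetical illustrations, not Uber's actual API:

```python
import enum
import uuid


class DeployState(enum.Enum):
    DEPLOYED = "deployed"
    RETIRED = "retired"


class ModelDeploymentService:
    """Toy stand-in for a model deployment API: deploy new models,
    retire unused ones, and track deployment state."""

    def __init__(self):
        # deployment_id -> {"uri": ..., "state": ...}; the real system
        # persists this in a model artifact and configuration store.
        self._registry = {}

    def deploy(self, model_uri: str) -> str:
        """Register a model for serving and return a tracking id."""
        deployment_id = str(uuid.uuid4())
        self._registry[deployment_id] = {"uri": model_uri,
                                         "state": DeployState.DEPLOYED}
        return deployment_id

    def status(self, deployment_id: str) -> DeployState:
        """Report the current state of a deployment."""
        return self._registry[deployment_id]["state"]

    def retire(self, deployment_id: str) -> None:
        """Mark a model as retired so serving instances unload it."""
        self._registry[deployment_id]["state"] = DeployState.RETIRED
```

In this sketch the registry lives in memory; the system described here keeps the goal state in a shared store and publishes progress updates to a metadata store.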
### Dynamic Model Loading

In the past, we baked model artifacts into the real-time prediction service's Docker image and deployed models together with the service. As model deployments accelerated, this heavyweight process became a bottleneck for model iteration and caused friction between model and service developers.

To address this, we implemented dynamic model loading. The model artifact and configuration store holds the goal state of which models should be served in production. Each real-time prediction service instance periodically checks this store, compares it against its local state, loads newly added models, and removes retired ones. Dynamic loading decouples the model lifecycle from the server development cycle, speeding up iteration on production models.

### Model Deployment Workflow

Rather than simply pushing a trained model into the model artifact and configuration store, the deployment workflow builds a self-contained, validated model package in several steps:

- **Artifact validation**: ensure the trained model includes all artifacts required for serving and monitoring.
- **Compilation**: package all model artifacts and metadata into a single self-contained, loadable package for the real-time prediction service.
- **Serving validation**: load the compiled model jar locally and run model predictions on sample data from the training dataset; this step verifies that the model can run and is compatible with the real-time prediction service.

We do this mainly to protect the stability of the real-time prediction service. Because multiple models are loaded in the same container, a single bad model could fail prediction requests and potentially disrupt the other models in that container.

### Model Deployment Tracking

To help machine learning engineers manage their production models, we track deployed models, as shown in Figure 2 above. The scheme has two parts:

- **Deployment progress tracking**: the deployment workflow publishes progress updates to a centralized metadata store for tracking.
- **Health checks**: once a model completes its deployment workflow, it becomes a candidate for model health checks. These checks run periodically to track the model's health and usage, and push updates to the metadata store.

## Model Auto-Retirement

An API exists for retiring unused models. In many cases, however, people forget to call it, or do not build model cleanup into their machine learning workflows, which incurs unnecessary storage costs and a growing memory footprint. A large memory footprint can cause [Java garbage collection](https://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html) pauses and out-of-memory errors, both of which degrade [quality of service](https://en.wikipedia.org/wiki/Quality_of_service).
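One way to bound this growth, and the approach this section describes, is to attach an owner-set expiration to each model and retire models that go unused past it. A minimal sketch, where the `Model` record and its field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Model:
    name: str
    expires_at: datetime      # expiration chosen by the model owner
    last_served_at: datetime  # updated by the serving layer on each request


def auto_retire(models, now, warn):
    """Retire models that are past expiration and have served no traffic
    since it; warn their owners, and return the surviving models."""
    kept = []
    for m in models:
        expired = now >= m.expires_at
        unused = m.last_served_at < m.expires_at  # no traffic since expiry
        if expired and unused:
            warn(f"retiring unused model {m.name}")  # notify, then retire
        else:
            kept.append(m)
    return kept
```

A periodic workflow running a pass like this is what drops the storage and memory overhead of forgotten models.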
To solve this, we built an auto-retirement process in which owners set an expiration for each model. If a model goes unused past its expiration, the auto-retirement workflow shown in Figure 1 above warns the relevant users and retires the model. Since enabling this feature, we have seen a significant drop in resource usage.

## Auto-Shadow

As machine learning engineers adopt different model rollout strategies, they often need a way to distribute real-time prediction traffic across a group of models. We saw common patterns in their traffic distribution strategies, such as gradual rollout and shadowing.

In a gradual rollout, clients fork traffic and progressively shift its distribution across a group of models. In shadowing, clients duplicate the traffic of an initial (primary) model and apply it to another (shadow) model. Figure 3 shows a typical traffic distribution across a group of models, where models A, B, and C are in a gradual rollout and model D shadows model B.

[Figure 3: Real-time prediction traffic distribution across a group of models (image: https://static001.infoq.cn/resource/image/e0/a5/e0563b7a43c345f345e8dd3128cc8ba5.jpg)]

To cut the engineering time spent on duplicate implementations of these common patterns, the real-time prediction service provides built-in mechanisms for traffic distribution. Here we focus on the auto-shadow case.

Although different teams adopt different shadowing strategies, they share common traits:

- Shadow predictions on production data are not used in production; they are collected for analysis.
- A shadow model shares most features with its primary model. This is especially true in workflows where users retrain and update models on a regular schedule.
- Shadowing usually runs over a window of days or weeks before it stops.
- A primary model can be shadowed by multiple shadow models; a shadow model can shadow multiple primary models.
- Shadow traffic can be 100% of the primary model's traffic, or selected by some criteria on it.
- The same predictions are collected for both primary and shadow models so that results can be compared.
- A primary model may serve millions of predictions, so prediction logs may be sampled.

Auto-shadow configuration is simply one part of the model deployment configuration. The real-time prediction service reads the auto-shadow configuration and distributes traffic accordingly. Users only need to set up the shadow relation and shadow criteria (what to shadow, and for how long) through API endpoints, and make sure any features the shadow model needs beyond the primary model's are added.

We found that the built-in auto-shadow feature brings extra benefits:

- Because most primary and shadow models share a common feature set, the real-time prediction service can fetch from the online feature store only the features the shadow model needs that the primary model does not use.
- By combining the built-in prediction logging logic with the shadow sampling logic, the real-time prediction service can reduce shadow traffic to only the requests that are destined to be logged.

If the service comes under pressure, shadow models can be treated as secondary and paused/resumed to relieve load.
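The shadowing behavior above, serving the primary model's prediction while mirroring a sampled slice of traffic to the shadow for logging only, can be sketched as follows. The function and parameter names are illustrative, not the service's real API:

```python
import random


def predict_with_shadow(primary, shadow, features,
                        shadow_rate=1.0, log=lambda record: None):
    """Serve the primary model's prediction; mirror a sampled fraction of
    requests to the shadow model, whose output is only logged for analysis."""
    result = primary(features)
    if shadow is not None and random.random() < shadow_rate:
        shadow_result = shadow(features)  # never returned to the caller
        log({"features": features,
             "primary": result,
             "shadow": shadow_result})
    return result
```

Note that the sampling decision gates both the shadow prediction and the log record, mirroring the benefit described above: the shadow model only runs on requests that will actually be logged.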
## Continuous Integration and Deployment

We rely on CI/CD for service release deployment across a fleet of real-time prediction services. Because we support [critical business use cases](https://eng.uber.com/scaling-michelangelo/), beyond the validation performed during model deployment we also need a high degree of trust in the automated continuous integration and deployment process.

Our solution tries to catch the following problems with a new release:

- **Incompatible code changes**: this can show up in two ways. Either a model fails to load or to predict with the new binary, or its behavior changes with the new release. The latter is hard to identify and fix, and is critical to model correctness.
- **Incompatible dependencies**: the service fails to start because of changes in underlying dependencies.
- **Incompatible build scripts**: the release fails to build because of changes in the build scripts.

To address these problems, we adopted a three-stage strategy for validating and deploying the latest real-time prediction service binary: staging integration tests, canary integration tests, and production release.

Staging and canary integration tests run in non-production environments. Staging integration tests verify basic functionality; once they pass, we run canary integration tests to check serving behavior for all production models. After verifying that production model behavior is unchanged, the release is deployed to all production real-time prediction service instances in a rolling deployment.

## Final Thoughts

We have shared our solutions to several MLOps challenges. As we grow Uber's machine learning infrastructure and platform and support new machine learning use cases, we see new MLOps challenges emerge. Among them: near-real-time monitoring of inference accuracy, feature quality, and business metrics; deploying and maintaining [multi-task learning](https://en.wikipedia.org/wiki/Multi-task_learning) and hybrid models; feature validation; improved model rollback mechanisms; and model traceability and debuggability. Stay tuned.

**About the authors:**

Joseph Wang, software engineer on Uber's Machine Learning Platform team, based in San Francisco. Works on the feature store, real-time model prediction serving, the model quality platform, and model performance.

Jia Li, senior software engineer on Uber's Machine Learning Platform team. Works on model deployment, real-time prediction, and model monitoring.

Yi Zhang, senior software engineer on Uber's Machine Learning Platform team. Excels at solving big-data problems from the data infrastructure layer up to the data application layer.

Yunfeng Bai, TLM on Uber's Machine Learning Platform team. He leads the team's efforts in model management and real-time prediction.

**Original link:**

https://eng.uber.com/continuous-integration-deployment-ml/