Benefits of Loosely Coupled Deep Learning Serving and a Deployment Example

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":5},"content":[{"type":"text","text":"本文要點"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨着深度網絡變得更加專業化,對資源的需求也越來越大,對於初創公司和規模化擴張的企業而言,在預算緊張的環境中爲這些運行在加速硬件上的網絡提供服務(serving)也變得越來越困難。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"鬆散耦合的架構可能是更好的選項,因爲它們在爲深度網絡提供服務時具有高度可控性、易適應性、透明可觀察性和自動縮放性(成本效益較高)。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"任何規模的公司都可以利用各種託管雲組件(例如函數、消息服務和API網關),使用相同的服務基礎架構來處理公共和內部請求。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"託管消息代理帶來了輕鬆的可維護性,無需專門的團隊負責維護工作。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"適應無服務器和鬆散耦合組件後,深度學習解決方案的開發和交付速度也可能會提升。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨着深度學習在許多行業的應用範圍不斷擴大,深度網絡的規模和特異性也在增加。大型網絡需要更多資源,並且由於它們的任務特定性(如定製的函數\/層),將它們從急切執行(eager execution)編譯爲運行在CPU或FPGA後端上的優化計算圖可能是無法做到的。因此,這種模型在運行時可能需要顯式GPU加速和自定義配置。然而,深度網絡都是在資源有限的約束環境中運行的,環境中雲GPU的價格頗爲昂貴,且低優先級(即競價、搶佔式)實例相當稀缺。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由於調用者和API處理者之間的緊密時間耦合,使用常見的機器學習服務框架將這些網絡投入生產,可能會給機器學習工程師和架構師帶來很多麻煩。這種情況是非常有可能出現的,特別是對於採用深度學習的初創公司和規模化擴張的公司來說更是如此。從GPU內存管理到縮放,他們在服務深度網絡時可能會面臨多個問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在本文中,我將重點介紹一種部署深度網絡的替代方法的優勢,這種方法會暴露基於消息的中介。這會放鬆我們在REST\/RPC API服務中看到的緊密時間耦合,並帶來異步運行的深度學習架構,在初創和擴張公司工作的工程師會更喜愛這種架構。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我將使用四大指標來對比這個服務架構與REST框架:"},{"type":"text","marks":[{"type":"strong"}],"text":"可控性"},{"type":"text","text":"、"},{"type":"text","marks":[{"type":"strong"}],"text":"適應性"},{"type":"text","text":"、"},{"type":"text","marks":[{"type":"strong"}],"text":"可觀察性"},{"type":"text","text":"和"},{"type":"text","marks":[{"type":"strong"}],"text":"自動縮放性"},{"type":"text","text":"。我將進一步展示如何使用一系列現代雲組件輕鬆地將REST端點添加到面向消息的系統中。本文將要討論的所有想法都是雲和語言無關的。它們也可以用在本地服務器上託管的場景中。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"服務深度網絡的挑戰"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"深度網絡是由一些高度級聯的非線性函數組成的,這些函數形成應用於數據的計算圖。在訓練階段,網絡使用選定的輸入\/輸出對以最小化選定目標的方式來調整這些圖的參數。在推理時,輸入數據簡單地流過這個優化圖。正如上述介紹所示,任何深度網絡都要面對的一個明顯的挑戰就是它的計算密集度。知道了這一點後,你可能會驚訝地發現,基於緊密時間耦合的REST\/RPC 
API-based serving causes no problems as long as the graph that makes up the deep network can be optimized through layer fusion, quantization, or pruning. However, such optimization is never guaranteed, and under-optimized networks are common when research models are moved to production. Most ideas coming out of R&D are idiosyncratic, and the generic frameworks built to optimize computation graphs may not apply to such networks (for example, PyTorch/ONNX JIT cannot compile layers with the [Swish](https://arxiv.org/pdf/1710.05941.pdf) activation function).

In those cases, the tight coupling that REST APIs impose is undesirable. Another route to faster inference is compiling the graph to run on specialized hardware designed to parallelize the graph's computations well (e.g., FPGAs, ASICs). But again, the specificity problem cannot be handled this way, because custom functions must be integrated into FPGAs through hardware description languages (e.g., Verilog, VHDL).

Given that deep networks will keep growing and, driven by industry needs, keep specializing, explicit GPU acceleration at inference time should be expected in the near future. Decoupling the synchronous interface between callers and serving functions, and allowing highly controllable pull-based inference, is therefore advantageous on many levels.

## Breaking the Tight Temporal Coupling

We can relax temporal coupling by adding an intermediary service to the system. In a way, this is like communicating with your neighbor through an email provider rather than shouting through their window. With a message intermediary (e.g., RabbitMQ, Azure Service Bus, AWS SQS, GCP Pub/Sub, Kafka, Apache Pulsar, Redis), the target gains full flexibility in handling the caller's requests (just as your neighbor may ignore your email until he or she finishes dinner). This is especially advantageous because, from an engineer's perspective, it enables **high controllability**. Consider deploying two deep networks, requiring 3 GB and 6 GB of memory at inference time, on a GPU with 8 GB of memory.

In a REST-based system, precautions would have to be taken up front to ensure that the workers of these two models never overcommit GPU memory; otherwise, some of the directly invoked requests would fail. With a queue, by contrast, a worker can choose to postpone the work until memory becomes available later. Because this happens asynchronously, the caller is not blocked and can carry on with its own work. This scenario fits internal requests with relatively relaxed time constraints especially well, but the same queue can also serve client or partner API requests in real time via cloud components such as serverless functions, as described in the next section.
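That postpone-until-memory-is-available behavior is simple enough to sketch. The snippet below mocks the broker with an in-process `queue.Queue` to stay self-contained; with a real broker, abandoning or re-queueing the message achieves the same effect. `torch.cuda.mem_get_info` requires a recent PyTorch, which is an assumption here.

```python
# Self-contained sketch of queue-based deferral under GPU memory pressure.
import queue
import time

import torch

REQUIRED_BYTES = 6 * 1024**3  # assumed footprint of the larger model (6 GB)

def gpu_has_room(required: int) -> bool:
    # Returns (free, total) bytes for the current device; available in
    # recent PyTorch releases (an assumption for this sketch).
    free, _total = torch.cuda.mem_get_info()
    return free >= required

def process(message: dict) -> None:
    ...  # run the large model on the message payload (omitted)

def worker_loop(q: "queue.Queue[dict]") -> None:
    while True:
        message = q.get()
        if not gpu_has_room(REQUIRED_BYTES):
            q.put(message)   # defer: hand the message back to the queue
            time.sleep(1.0)  # the caller is never blocked in the meantime
            continue
        process(message)
```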
Another compelling reason to choose message-mediated DL serving is its **easy adaptability**. To exploit the full potential of web frameworks and libraries, we have to face their learning curves; even a microframework like Flask is no exception. By contrast, we do not need to know the internals of a message intermediary, and every major cloud vendor offers its own managed messaging service, freeing engineers from maintenance work. Message mediation also has plenty of advantages for **observability**. Because message transmission is separated from the main deep learning worker by an explicit interface, logs and metrics can be aggregated independently. In the cloud, even that may be unnecessary, since managed messaging platforms handle logging automatically, with add-on services such as dashboards and alerts. A queuing mechanism of this kind also lends itself naturally to autoscaling.

Thanks to this high observability, queues give us the freedom to choose how workers **autoscale**. In the next section, we demonstrate an autoscalable container deployment of a DL model using [KEDA](https://keda.sh/) (Kubernetes Event-Driven Autoscaling), an open-source event-based autoscaling service designed to simplify the automatic management of K8s pods. It is currently a Cloud Native Computing Foundation sandbox project and supports as many as 30 scalers, ranging from Apache Kafka to Azure Service Bus, AWS SQS, and GCP Pub/Sub. KEDA's parameters let us freely tune the scaling mechanism according to the incoming data volume (e.g., the number of waiting messages, duration, and payload size).

## A Deployment Example

In this section, we walk through a template deployment using PyTorch worker containers and Kubernetes on Azure. Data communication is handled by Azure Service Bus, except for large artifacts such as network weights, input images, and possibly output images; those should be stored in blob storage and downloaded/uploaded from the containers with the Azure Python SDK for blobs. Figure 1 below gives a high-level overview of the architecture.

![Figure 1](https://res.infoq.com/articles/loosely-coupled-deep-learning-serving/en/resources/1fig-1-1627304648731.jpg)

Figure 1: A high-level overview of the proposed architecture. For each block, the corresponding Azure service is given in parentheses. It can handle both external REST API requests, via serverless functions, and internal requests coming directly from the queue.

We will implement an autoscalable [competing-consumers pattern](https://www.enterpriseintegrationpatterns.com/patterns/messaging/CompetingConsumers.html) server using Azure Service Bus queues and KEDA. To enable the [request-reply](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html) pattern for REST requests, [Azure Durable Functions external events](https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-external-events?tabs=python#send-events) can be used. In the example architecture, we assume a durable function is already in place and transmits the reply URL of the feedback event to the worker threads over the Service Bus queue; the details of setting up this service are explained in the Azure documentation. KEDA lets us define scaling rules based on queue length, so the number of worker pods in K8s is updated automatically with the load. We also bind one worker container (or, in our case, several) to one GPU, so that we can autoscale any managed cluster and add more GPU machines to the system without any hassle. K8s automatically handles cluster autoscaling to resolve resource constraints (i.e., [node pressure](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/) caused by an insufficient number of GPUs).

A detailed template showing how to serve a vanilla ResNet classifier can be found in this [Github repository](https://github.com/elras/Loosely-Coupled-ML-Serving); shortened versions of each block are shown here. As a first step, we create our deep network serving function ([network.py](https://github.com/elras/Loosely-Coupled-ML-Serving/blob/master/network.py)). A template class that initializes the inference function can be written as follows and customized for the task at hand (e.g., segmentation, detection):

```python
import torch

class Infer(object):
    __slots__ = [""]

    # Initialize the inference function (e.g., download weights from blob storage)
    def __init__(self, tuned_weights=None):
        pass

    # Perform inference
    @torch.no_grad()
    def __call__(self, pil_image):
        pass
```
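To make the template concrete, here is a hedged sketch of what a filled-in version for the ResNet classifier could look like, assuming torchvision is available; the repository's actual implementation may differ in its details.

```python
# Illustrative, filled-in version of the template (a sketch, not the repo code).
import torch
import torchvision.models as models
import torchvision.transforms as T

class Infer(object):
    __slots__ = ["model", "transform"]

    def __init__(self, tuned_weights=None):
        # In production, tuned weights would be fetched from blob storage;
        # here we fall back to the pretrained ImageNet weights.
        self.model = models.resnet50(pretrained=True).eval().cuda()
        self.transform = T.Compose([
            T.Resize(256),
            T.CenterCrop(224),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225]),
        ])

    @torch.no_grad()
    def __call__(self, pil_image):
        batch = self.transform(pil_image).unsqueeze(0).cuda()
        logits = self.model(batch)
        # Return the class IDs of the top-5 ImageNet categories.
        return logits.topk(5).indices.squeeze(0).tolist()
```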
In the repository's version of this function, we return the class IDs of the top-5 ImageNet categories. Next, we write our worker Python function ([run.py](https://github.com/elras/Loosely-Coupled-ML-Serving/blob/master/run.py)), where we integrate the model with Azure Service Bus. As the snippet below shows, the Azure Python SDK for Service Bus makes managing the incoming message queue very simple. PEEK_LOCK mode lets us explicitly control when to complete or abandon each incoming request:

```python
...
with servicebus_client:
    receiver = servicebus_client.get_queue_receiver(
        queue_name=self.bus_queue_name,
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
    )
    with receiver:
        for message in receiver:
            # Run the service on the input data
            data = json.loads(str(message))
            ...
```

With the template worker ready, we create the container's Dockerfile and push the image to Azure Container Registry. Here, requirements.txt holds our worker's extra pip dependencies; note that it must be copied into the image before the pip install step can run. Launching the main process via exec is a trick that ensures it runs as the PID 1 process, which lets the cluster restart the pod automatically on any error without an explicit liveness endpoint in the deployment YAML. Specifying health checks is nevertheless still the better practice:

```dockerfile
FROM pytorch/pytorch:1.8.0-cuda11.1-cudnn8-runtime
COPY . /worker
WORKDIR /worker
RUN python -m pip install -r requirements.txt
CMD exec python run.py
```

```shell
NAME= # SET
VERSION= # SET
ACR= # SET
sudo docker build -t $ACR.azurecr.io/$NAME:$VERSION -f Dockerfile .
sudo docker push $ACR.azurecr.io/$NAME:$VERSION
```

After creating the K8s cluster, don't forget to enable the node autoscaling feature from the portal (e.g., minimum 1 node, maximum 8). As a final preparation step, we need to enable the [GPU drivers](https://docs.microsoft.com/en-us/azure/aks/gpu-cluster) in the cluster (in the gpu-resources namespace) and deploy the [KEDA service](https://github.com/kedacore/keda/releases) via its official YAML file (in the keda namespace). For the reader's convenience, the KEDA and GPU driver YAML files are included in the repository:

```shell
kubectl create ns gpu-resources
kubectl apply -f nvidia-device-plugin-ds.yaml
kubectl apply -f keda-2.3.0.yaml
```

Next, we can deploy the worker containers via the prepared shell script. First, we create the namespace our services will be deployed into:

```shell
kubectl create namespace ml-system
```
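For reference, the kind of manifest the deployment script fills in and applies looks roughly like the following KEDA ScaledObject. All names, the namespace, and the thresholds here are illustrative assumptions rather than values taken from the repository.

```yaml
# Sketch of a KEDA ScaledObject for queue-length-based scaling (illustrative).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
  namespace: ml-system
spec:
  scaleTargetRef:
    name: worker            # the worker Deployment (hypothetical name)
  minReplicaCount: 1        # can be set to 0 to cut idle GPU costs (see below)
  maxReplicaCount: 8
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: inference-requests   # hypothetical queue name
        messageCount: "5"               # target waiting messages per replica
      authenticationRef:
        name: servicebus-trigger-auth   # a TriggerAuthentication resource
```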
Note that using a shell script instead of plain YAML makes it easy to change parameters. After running the deployment script ([deploy.sh](https://github.com/elras/Loosely-Coupled-ML-Serving/blob/master/deploy.sh)), we are all set (don't forget to set the parameters to your needs):

```shell
bash deploy.sh
```

Because we restrict each pod to a single GPU, scaling pods via KEDA also effectively scales the cluster nodes, which makes the overall architecture highly cost-effective. In some cases, you can even set the minimum node count to zero and cut GPU costs entirely while the workers are idle. However, this must be configured with great care, taking node spin-up time into account. The details of the KEDA parameters used in the deployment script can be found in the official [documentation](https://keda.sh/docs/2.3/scalers/azure-service-bus/). If you look closely at the deployment script ([deploy.sh](https://github.com/elras/Loosely-Coupled-ML-Serving/blob/master/deploy.sh)), you will see that we set the NVIDIA_VISIBLE_DEVICES environment variable to "all" so that a second container (worker-1) can also access the GPU. This trick lets us benefit from cluster scaling and from multiple containers per pod at the same time. Without it, K8s would not allow more containers per GPU because of worker-0's "limit" constraint. Engineers should measure their models' GPU memory usage and add containers according to the limits of the GPU card. Note that, for brevity, the details of the Azure-specific blocks in Figure 1 (other than the example Service Bus receiver) are not shown; Azure provides extensive documentation for each component, along with relevant Python example implementations.
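Finally, the producer side, i.e., how an internal service enqueues an inference request, is equally brief. Here is a hedged sketch; the queue name, environment variable, and payload fields are illustrative assumptions.

```python
# Illustrative producer: enqueue one inference request on Azure Service Bus.
import json
import os

from azure.servicebus import ServiceBusClient, ServiceBusMessage

payload = {"blob_path": "inputs/image-001.png", "reply_url": None}

client = ServiceBusClient.from_connection_string(os.environ["SERVICE_BUS_CONN_STR"])
with client:
    sender = client.get_queue_sender(queue_name="inference-requests")
    with sender:
        # Returns as soon as the broker accepts the message; no worker is awaited.
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```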
## Future Directions

For anyone who watched computing technologies evolve over the twentieth century, what is currently happening in deep learning hardware research will look very familiar. One obvious low-hanging fruit is the miniaturization of transistors, packing as many compute cores as possible onto a single chip; back then, the industry ultimately leaned heavily on VLSI improvements rather than algorithmic development. Seeing how quickly hardware customized for deep learning is growing, we might expect the twenty-first century to replay that history. In the cloud, meanwhile, serverless accelerated DL serving looks like the fruit that is easiest to pick: deep learning deployment will be abstracted further, and pay-as-you-go startups will be everywhere in the near future. Given the flexibility that loosely coupled architectures provide, we can also reasonably predict that they will lighten the load on engineers working at such startups, so we may well see many fledgling open-source projects targeting them.

## Summary

In this article, we covered the four major benefits of message-mediated deep learning serving: controllability, adaptability, observability, and (cost-effective) autoscalability. In addition, I provided template code for deploying the described architecture on the Azure platform. It should be stressed that this serving flexibility may be impractical in some scenarios, such as IoT and embedded-device serving, where the local independence of components weighs heavily. Still, the ideas presented here can be adopted in several ways; for instance, instead of cloud messaging services, low-level C/C++ message broker libraries can be used to build a similar loosely coupled architecture on resource-constrained platforms (e.g., for autonomous driving or IoT needs).

Want to try the concepts from this article in practice? You can [find the accompanying code on Github](https://github.com/elras/Loosely-Coupled-ML-Serving).

##### About the Author

**Sabri Bolkar** is a machine learning applied scientist and engineer interested in the complete lifecycle of learning-based systems, from R&D to deployment and continuous improvement. He studied computer vision at NTNU and completed his master's thesis on unsupervised image segmentation at KU Leuven. After a PhD at TU Delft in the Netherlands, he is currently tackling the challenges that large-scale applied deep learning poses for the e-commerce industry. Readers can contact him via his [website](https://dunk.ai/).

**Original link:** [Benefits of Loosely Coupled Deep Learning Serving](https://www.infoq.com/articles/loosely-coupled-deep-learning-serving/)