On the Problems and Limitations of Kubernetes

Kubernetes, first released in 2014, has become the de facto standard for container orchestration, a point that almost everyone who talks about Kubernetes will repeat. As the chart below shows, most individuals and teams today choose Kubernetes to manage their containers, and 75% of them run Kubernetes in production.

![kube-in-prod](https://static001.geekbang.org/infoq/4a/4a107ff4a0be8936127c81ea7ae4f390.webp)

**Figure 1 - Kubernetes container orchestration**[1]

Against this backdrop of near-universal adoption, we should also understand very clearly where Kubernetes falls short. Although it solves most problems in container orchestration, there are still scenarios it handles poorly or cannot handle at all; only with a clear picture of these potential risks can we wield the technology well. This article discusses the current state and limitations of the Kubernetes community from two angles: cluster management and application scenarios.

## Cluster Management

A cluster is a group of computers that work together, and we can treat all of the machines in a cluster as a whole. Every resource scheduling system manages at the granularity of a cluster: the machines in it form a resource pool, and this large pool supplies the resources that containers need to execute their computations. This section briefly covers a few of the hard problems Kubernetes faces in cluster management.

### Horizontal Scalability

Cluster size is one of the key metrics to examine when evaluating a resource management system, yet the cluster scale Kubernetes can manage falls far short of other resource management systems in the industry. Why does cluster size matter? To answer that, let us first look at another equally important metric: resource utilization. Many engineers may never have requested resources on a public cloud platform; these resources are quite expensive. On AWS, a virtual machine instance with a configuration similar to an ordinary host (8 CPU, 16 GB) costs roughly 150 USD per month, about 1,000 RMB[2].

![aws-instance-pricing](https://static001.geekbang.org/infoq/25/254eb01d059268dc81f4d1796a87a5eb.png)

**Figure 2 - AWS EC2 pricing**

Most clusters use physical or virtual machines with 48 or 64 CPUs as their nodes. If a cluster needs 5,000 such nodes, those nodes cost roughly 8,000,000 USD per month, about 50,000,000 RMB. In a cluster of that size, **improving resource utilization by just 1% is equivalent to saving about 500,000 RMB in cost every month**.

Most online workloads have low resource utilization. A larger cluster can run more workloads, and co-locating workloads whose peaks and troughs occur at different times enables overcommitting, which can significantly raise the cluster's resource utilization. If a single cluster has enough nodes, we can combine different types of workloads more sensibly and stagger the peak periods of different services.
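The cost arithmetic above can be sketched as a quick back-of-the-envelope calculation. The per-node price and exchange rate below are rough assumptions derived from the figures in the text, not exact AWS quotes:

```python
# Back-of-the-envelope cluster cost estimate; per-node price and
# exchange rate are assumptions based on the article's rough figures.
NODES = 5_000
USD_PER_NODE_MONTH = 1_600   # assumed price of a 48-64 CPU machine
USD_TO_RMB = 6.25            # rough exchange rate assumption

monthly_usd = NODES * USD_PER_NODE_MONTH
monthly_rmb = monthly_usd * USD_TO_RMB

# Value of a 1% utilization improvement: roughly 1% of the total bill,
# since that is compute you no longer need to buy.
saved_rmb = monthly_rmb * 0.01

print(f"monthly cost: ${monthly_usd:,} (~{monthly_rmb:,.0f} RMB)")
print(f"1% utilization gain ≈ {saved_rmb:,.0f} RMB/month saved")
```

This is why utilization, and therefore maximum cluster size, matters so much at scale: the same percentage gain is worth far more in a bigger pool.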
The Kubernetes community officially states that a single cluster supports at most 5,000 nodes, no more than 150,000 total Pods, no more than 300,000 total containers, and no more than 100 Pods per node[3]. Compared with Apache Mesos clusters of tens of thousands of nodes and Microsoft's 50,000-node YARN cluster[4], Kubernetes cluster scale is a full order of magnitude smaller. Although Alibaba Cloud engineers have reached five-digit cluster scale by optimizing the individual Kubernetes components, the gap with other resource management approaches remains considerable[5].

![Apache-Mesos-vs-Hadoop-YARN](https://static001.geekbang.org/infoq/a8/a8fc02adac33ccda98ee860c6cb93595.jpeg)

**Figure 3 - Apache Mesos vs. Hadoop YARN**

Note that although the Kubernetes community claims a single cluster can support 5,000 nodes, and maintains a variety of integration tests to ensure that every change preserves scalability[6], Kubernetes is genuinely complex, and nobody can guarantee that every feature you use will behave well while scaling out. In production, we may even hit bottlenecks when a cluster grows to 1,000 ~ 1,500 nodes.

Every company of a certain size wants to run larger Kubernetes clusters, but this is not a simple problem solved by changing a few lines of code; it may require restricting the use of some Kubernetes features, and during scale-out, etcd, the API server, the scheduler, and the controllers can all run into trouble. Some developers in the community have already noticed a few of these problems, for instance adding caches on the nodes to reduce the load on the API server[7], but pushing such changes through is still difficult, and motivated contributors are welcome to drive similar projects in the community.

### Multi-Cluster Management

No matter how large a single cluster's capacity becomes, it cannot solve all of the problems an enterprise faces. Even if one day a Kubernetes cluster could reach 50,000 nodes, we would still need to manage multiple clusters. Multi-cluster management is a direction the Kubernetes community is actively exploring; the Multi-Cluster Special Interest Group (SIG Multi-Cluster) is working on exactly this[8]. In the author's view, Kubernetes multi-cluster setups bring three major problems: resource imbalance, difficult cross-cluster access, and higher operations and management costs. Here we discuss a few solutions currently available for reference in the open source community and industry.

#### kubefed

The first project to introduce is [kubefed](https://github.com/kubernetes-sigs/kubefed), the solution offered by the Kubernetes community itself. It provides both cross-cluster resource management and cross-cluster network management, and SIG Multi-Cluster is responsible for its development:

![kubernetes-federation](https://static001.geekbang.org/infoq/e4/e41b207e49048effa22cf003a18cca10.png)

**Figure 4 - Kubernetes federation**

kubefed manages the metadata of multiple clusters through a centralized federation control plane. The upper control plane creates federated objects for the resources in the managed clusters, for example `FederatedDeployment`:

```yaml
kind: FederatedDeployment
...
spec:
  ...
  overrides:
    # Apply overrides to cluster1
    - clusterName: cluster1
      clusterOverrides:
        # Set the replicas field to 5
        - path: "/spec/replicas"
          value: 5
        # Set the image of the first container
        - path: "/spec/template/spec/containers/0/image"
          value: "nginx:1.17.0-alpine"
        # Ensure the annotation "foo: bar" exists
        - path: "/metadata/annotations"
          op: "add"
          value:
            foo: bar
        # Ensure an annotation with key "foo" does not exist
        - path: "/metadata/annotations/foo"
          op: "remove"
        # Adds an argument `-q` at index 0 of the args list
        # this will obviously shift the existing arguments, if any
        - path: "/spec/template/spec/containers/0/args/0"
          op: "add"
          value: "-q"
```

The upper control plane generates a corresponding `Deployment` from the spec of the federated `FederatedDeployment` object and pushes it down to the lower clusters, and each lower cluster then creates the specified number of replicas from the `Deployment` definition as usual.

![kubernetes-federated-resources](https://static001.geekbang.org/infoq/3f/3faf9d3ce2d17099a32ee6fa21b4a4e4.png)

**Figure 5 - From federated objects to regular objects**
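The override mechanism shown above behaves much like a patch applied to the pushed-down template: each `path` walks into the object and replaces a value. A minimal sketch of that idea, as a simplified toy rather than kubefed's actual patch implementation (it only handles plain set-style overrides, not `op: add`/`op: remove`):

```python
# Toy illustration of kubefed-style overrides: walk a "/a/b/0/c" path
# into a nested dict/list structure and set the value at the end.
# Simplified; the real kubefed semantics also support add/remove ops.
def apply_override(obj, path, value):
    keys = [k for k in path.split("/") if k]
    target = obj
    for key in keys[:-1]:
        key = int(key) if isinstance(target, list) else key
        target = target[key]
    last = keys[-1]
    last = int(last) if isinstance(target, list) else last
    target[last] = value
    return obj

deployment = {
    "spec": {
        "replicas": 3,
        "template": {"spec": {"containers": [{"image": "nginx:1.16"}]}},
    }
}
apply_override(deployment, "/spec/replicas", 5)
apply_override(deployment, "/spec/template/spec/containers/0/image",
               "nginx:1.17.0-alpine")
print(deployment["spec"]["replicas"])  # 5
```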
`FederatedDeployment` is only the simplest distribution strategy. In production we want the federated clusters to support more complex behavior such as disaster recovery, and for that we can use `ReplicaSchedulingPreference` to implement smarter distribution strategies across clusters:

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    A:
      minReplicas: 4
      maxReplicas: 6
      weight: 1
    B:
      minReplicas: 4
      maxReplicas: 8
      weight: 2
```

The scheduling preference above weights workloads across clusters and can migrate instances to other clusters when one cluster runs out of resources or even fails. This both improves the flexibility and availability of deployments and lets infrastructure engineers balance the load across multiple clusters more effectively.
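A simplified sketch of how such a weighted distribution might be computed: hand out replicas proportionally to each cluster's weight, clamp to its min/max, and push any remainder to clusters that still have room. This is an illustration of the idea only, not kubefed's actual rebalancing algorithm:

```python
# Toy weighted replica distribution: proportional to weight, clamped to
# [min, max]. Not kubefed's real algorithm -- just the core idea.
def distribute(total, clusters):
    weight_sum = sum(c["weight"] for c in clusters.values())
    alloc = {}
    for name, c in clusters.items():
        share = round(total * c["weight"] / weight_sum)
        alloc[name] = min(max(share, c["min"]), c["max"])
    # Fix up rounding/clamping so allocations sum to `total`,
    # adjusting heavier-weighted clusters first.
    diff = total - sum(alloc.values())
    for name, c in sorted(clusters.items(), key=lambda kv: -kv[1]["weight"]):
        if diff == 0:
            break
        step = max(min(diff, c["max"] - alloc[name]), c["min"] - alloc[name])
        alloc[name] += step
        diff -= step
    return alloc

prefs = {
    "A": {"min": 4, "max": 6, "weight": 1},
    "B": {"min": 4, "max": 8, "weight": 2},
}
print(distribute(9, prefs))  # → {'A': 4, 'B': 5}
```

With the spec above, the weight-proportional split of 9 replicas would be A=3, B=6, but honoring A's `minReplicas: 4` pushes the final allocation to A=4, B=5.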
We can think of kubefed's main role as assembling multiple loosely coupled clusters into one tightly coupled federated cluster and providing more advanced network and deployment features on top, which makes it easier to address resource imbalance and connectivity problems between clusters. However, the project's scope does not include managing the cluster lifecycle itself.

#### Cluster API

[Cluster API](https://cluster-api.sigs.k8s.io/) is another Kubernetes community project related to multi-cluster management. It is developed by the Cluster Lifecycle Special Interest Group (SIG Cluster-Lifecycle), and its main goal is to simplify the provisioning, upgrading, and operation of multiple clusters through declarative APIs; you can find its scope in the project's [design proposal](https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/scope-and-objectives.md)[9].

![k8s-cluster-api-concepts](https://static001.geekbang.org/infoq/74/7402be811259a537a1ed20de054ec9a7.png)

**Figure 6 - Cluster API concepts**

The most important resource in the project is `Machine`, which represents a node in a Kubernetes cluster. When the resource is created, a provider-specific controller provisions a machine according to its definition and registers the new node with the cluster; when the resource is updated or deleted, the controller likewise performs whatever operations are needed to reach the user's desired state.

This strategy is somewhat similar to Alibaba's approach to multi-cluster management: both use declarative APIs to define the state of machines and clusters, and then use Kubernetes' native Operator model to manage the lower-level clusters from a higher-level cluster, which can greatly reduce the cost of operating clusters and improve their efficiency[10]. However, projects of this kind do not consider cross-cluster resource management or network management.
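The `Machine` controller described above follows the standard Operator reconcile pattern: observe the real world, compare it with the declared spec, and converge. A highly simplified sketch of that control loop, with a hypothetical `FakeCloud` standing in for a provider API (real controllers are built on controller-runtime, not like this):

```python
# Minimal sketch of an Operator-style reconcile loop for a "Machine"
# resource. FakeCloud stands in for a provider-specific API and is
# purely hypothetical.
def reconcile(machine, cloud):
    name, spec = machine["name"], machine["spec"]
    actual = cloud.get(name)          # observe the real world
    if machine.get("deleted"):
        if actual is not None:
            cloud.delete(name)        # drive toward "gone"
        return "deleted"
    if actual is None:
        cloud.create(name, spec)      # node missing: provision it
        return "created"
    if actual != spec:
        cloud.update(name, spec)      # drift detected: converge to spec
        return "updated"
    return "in-sync"

class FakeCloud:
    def __init__(self): self.nodes = {}
    def get(self, n): return self.nodes.get(n)
    def create(self, n, s): self.nodes[n] = dict(s)
    def update(self, n, s): self.nodes[n] = dict(s)
    def delete(self, n): self.nodes.pop(n)

cloud = FakeCloud()
m = {"name": "node-1", "spec": {"type": "m5.xlarge"}}
print(reconcile(m, cloud))  # first pass provisions the node
print(reconcile(m, cloud))  # second pass finds nothing to do
```

The appeal of the declarative model is exactly this loop: the same function handles provisioning, drift correction, and teardown, which is why managing clusters-of-clusters this way lowers operational cost.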
## Application Scenarios

This section looks at a few interesting application scenarios in Kubernetes, including the current state of application distribution, batch job scheduling, and hard multi-tenancy support in a cluster. These are areas the community pays close attention to, and also Kubernetes' current blind spots.

### Application Distribution

The main Kubernetes project provides a few basic ways to deploy applications: `Deployment`, `StatefulSet`, and `DaemonSet`, which cover stateless services, stateful services, and per-node daemons respectively. These resources provide the most basic strategies, but they cannot handle more complex applications.

![kubernetes-basic-resources](https://static001.geekbang.org/infoq/a7/a776573fafc2bc32961b35a8ba28f116.png)

**Figure 7 - Kubernetes application management**

Since the introduction of CRDs, the community's application management group (SIG Apps) rarely lands large changes in the main Kubernetes repository; most changes are small patches to existing resources. Many common scenarios, such as a DaemonSet that runs only once per node[11] or canary and blue-green deployments, are still unsupported, and the existing resources have plenty of problems of their own, for example a StatefulSet stuck in an init container that can neither roll back nor update[12].

It is understandable that the community does not want to maintain more basic resources in Kubernetes itself: a few basic resources cover 90% of the scenarios, and the remaining complex cases can be addressed by other communities through CRDs. Still, the author believes that implementing more high-quality components upstream would be valuable and important work for the whole ecosystem. Note that if you want to become a contributor to the Kubernetes project, SIG Apps may not be a great choice.

### Batch Scheduling

Running workloads like machine learning, batch jobs, and streaming jobs has never been Kubernetes' strength, from the day it was born until today. Most companies use Kubernetes to run online services that handle user requests, and use Yarn-managed clusters to run batch workloads.

![hadoop-yarn](https://static001.geekbang.org/infoq/8d/8dc164dfb1e6c56a59145471d97a005f.png)

**Figure 8 - Hadoop Yarn**

Online and offline jobs are often two completely different kinds of work. Most online jobs are stateless services that can migrate between machines and rarely have strong dependencies on one another; many offline jobs, by contrast, have complex topologies: some require several jobs to execute together, while others must run in an order dictated by their dependencies. Such complex scheduling scenarios are hard to handle in Kubernetes.

Before the scheduling framework was introduced into the Kubernetes scheduler, all Pods looked unrelated to the scheduler. With the framework, we can implement more complex scheduling policies in the scheduling system, such as a PodGroup that guarantees a group of Pods is scheduled together[13], which is very useful for Spark and TensorFlow jobs.

```yaml
# PodGroup CRD spec
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: nginx
spec:
  scheduleTimeoutSeconds: 10
  minMember: 3
---
# Add a label `pod-group.scheduling.sigs.k8s.io` to mark the pod belongs to a group
labels:
  pod-group.scheduling.sigs.k8s.io: nginx
```
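The core of the gang-scheduling idea above is simple: no Pod in a group should be allowed to bind until at least `minMember` Pods of that group can be placed. A toy sketch of that admission check, for illustration only (the real coscheduling plugin hooks into the kube-scheduler's plugin phases rather than working like this):

```python
# Toy gang-scheduling check: a pod in a group may only proceed once at
# least `min_member` pods of the same group exist. Illustrative only;
# the real scheduler-plugins implementation lives inside kube-scheduler.
def can_schedule(pod_group, pods):
    members = [p for p in pods if p["group"] == pod_group["name"]]
    return len(members) >= pod_group["min_member"]

group = {"name": "nginx", "min_member": 3}
pods = [{"name": f"nginx-{i}", "group": "nginx"} for i in range(2)]
print(can_schedule(group, pods))   # only 2 of 3 members exist yet

pods.append({"name": "nginx-2", "group": "nginx"})
print(can_schedule(group, pods))   # quorum reached: the gang can start
```

This all-or-nothing admission is exactly what Spark executors or distributed TensorFlow workers need: starting a partial gang would hold resources while making no progress.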
Volcano is another batch job management system built on Kubernetes[14]. It can handle machine learning, deep learning, and other big data applications, and supports multiple frameworks including TensorFlow, Spark, PyTorch, and MPI.

![volcano-logo](https://static001.geekbang.org/infoq/13/13e08c54db7c2cfe08d06bf79c28dfa5.png)

**Figure 9 - Volcano**

Although Kubernetes can run some batch jobs, it is still a very long way from replacing veteran resource management systems like Yarn in this domain. For quite some time to come, most companies will likely maintain both Kubernetes and Yarn stacks, managing and running different types of workloads on each.

### Hard Multi-Tenancy

Multi-tenancy means that a single software instance can serve different groups of users. Multi-tenancy in Kubernetes means multiple users or user groups sharing the same Kubernetes cluster. Today's Kubernetes still struggles to provide hard multi-tenancy, that is, multiple tenants of the same cluster that neither affect each other nor can perceive each other's existence.

Hard multi-tenancy is an important and difficult topic in Kubernetes. A shared apartment is a classic multi-tenant scenario: several tenants share the building's infrastructure, and hard multi-tenancy requires that those tenants never affect one another. You can imagine how difficult that is. The Kubernetes community even has a working group dedicated to discussing and researching these problems[15], yet although many engineers are interested, the results so far have been very limited.

![multi-tenant](https://static001.geekbang.org/infoq/c7/c71fd6606c7ad96585c74e6232f06bc0.jpeg)

**Figure 10 - Multi-tenancy**

Although Kubernetes uses namespaces to partition virtual clusters, even that can hardly achieve true multi-tenancy. What would multi-tenancy support actually buy us? Here are a few of its benefits:

- The extra deployment overhead Kubernetes brings is very high for small clusters: a stable Kubernetes cluster generally needs at least three master nodes running etcd, and if most of an organization's clusters are small, these extra machines add considerable overhead;
- Containers running in Kubernetes may share physical and virtual machines, and some developers have seen their services degraded by other workloads inside their own company: containers on a host may isolate CPU and memory, but they do not isolate resources like I/O, network, and CPU cache, which are comparatively hard to isolate;
If Kubernetes could achieve hard multi-tenancy, it would be a boon both for cloud providers and for users of small clusters: it would isolate containers from one another's influence and prevent potential security problems, although this remains hard to achieve at the current stage.

## Summary

Every technology has its own life cycle: the lower the layer, the longer the life cycle, and the higher the layer, the shorter. Although Kubernetes is the reigning champion of today's container world, nobody can say for sure what the future holds. We should always be clear about the strengths and weaknesses of the tools in our hands and spend some time learning the essence of Kubernetes' design. And if some day Kubernetes, too, becomes a thing of the past, we should feel glad, because a better tool will have taken its place.

1. Kubernetes and Container Security and Adoption Trends https://www.stackrox.com/kubernetes-adoption-security-and-market-share-for-containers/
2. AWS Pricing Calculator https://calculator.aws/#/createCalculator/EC2
3. Considerations for large clusters https://kubernetes.io/docs/setup/best-practices/cluster-large/
4. How Microsoft drives exabyte analytics on the world's largest YARN cluster https://azure.microsoft.com/en-us/blog/how-microsoft-drives-exabyte-analytics-on-the-world-s-largest-yarn-cluster/
5. 備戰雙 11!螞蟻金服萬級規模 K8s 集羣管理系統如何設計? https://www.sofastack.tech/blog/ant-financial-managing-large-scale-kubernetes-clusters/
6. sig-scalability-kubemark dashboard https://testgrid.k8s.io/sig-scalability-kubemark#kubemark-5000
7. Node-local API cache #84248 https://github.com/kubernetes/kubernetes/issues/84248
8. Multicluster Special Interest Group https://github.com/kubernetes/community/tree/master/sig-multicluster
9. Cluster API Scope and Objectives https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/scope-and-objectives.md
10. Demystifying Kubernetes as a service – How Alibaba cloud manages 10,000s of Kubernetes clusters https://www.cncf.io/blog/2019/12/12/demystifying-kubernetes-as-a-service-how-does-alibaba-cloud-manage-10000s-of-kubernetes-clusters/
11. Run job on each node once to help with setup #64623 https://github.com/kubernetes/kubernetes/issues/64623
12. StatefulSet does not upgrade to a newer version of manifests #78007 https://github.com/kubernetes/kubernetes/issues/78007
13. Coscheduling based on PodGroup CRD https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/42-podgroup-coscheduling
14. Volcano · A Kubernetes Native Batch System https://github.com/volcano-sh/volcano
15. Kubernetes Working Group for Multi-Tenancy https://github.com/kubernetes-sigs/multi-tenancy

Reprinted from: [面向信仰編程](https://draveness.me/)

Original link: [談談 Kubernetes 的問題和侷限性](https://draveness.me/kuberentes-limitations/)