以微博核心業務爲例,解讀如何僅用1臺服務器支持百萬DAU

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近些年,各家公司都在不斷推出各種新的 App,百萬 DAU 成爲各種 App 的最基本目標。本文將詳解如何通過大規格服務器 +K8s 的方案簡化這些新項目的成本評估、服務部署等管理工作,並在流量增長時進行快速擴容。同時,本文還介紹了微博核心業務採用此方案部署時遇到的問題以及對應的解決方案。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"問題與挑戰"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以一個常見的社交 App 後端服務爲例,如果採用主流微服務架構進行設計,通常會包含用戶、關係、內容、提醒、消息等多個模塊;每個模塊又會分別包含各自的 Web 接口服務、內部 RPC 服務、隊列處理機等幾部分;同時爲了保證高可用,通常每個模塊還需要部署 2 個及以上的實例,算下來僅部署上述列出的應用服務就超過 30 多個實例。而對於依賴的數據庫和緩存,甚至每個業務功能都需要部署獨立的實例,若再考慮讀寫分離、預留分庫分表等,則 App 上線前需要部署的數據庫和緩存實例可能多達上百個。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/ae\/ae74ee39eb03ac6ba8154879c2a82b89.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"(常見的社交 App 後端架構)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於上述這樣一個典型的 App,如果採用傳統的部署模式,則需要使用至少數十臺服務器才能滿足部署數十個應用服務實例以及上百個數據庫和緩存實例的需求。若要對整個服務佔用資源情況進行全面瞭解,或是對整個服務進行翻倍擴容,則還需要梳理清楚各個服務的依賴和部署情況,管理複雜度高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而如果還要進一步提升服務器的利用率,可能還需要使用不同規格的服務器部署不同的實例,或是將新的實例與其他業務或者已有集羣混部。在這種情況下,不論是服務部署、成本評估還是擴容縮容都變得相當困難,甚至連 App 下線時都需要做很多盤點和梳理工作才能完成。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"用一臺服務器解決問題"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於任意一個業務,顯然其使用的服務器規模越小、使用的服務器規格越少、提供和依賴的服務數量越少,其服務部署、成本評估以及擴容縮容就會更加容易。而對於一個新業務的後端服務,若其提供和依賴的服務數量沒辦法減少的話,減小服務器的規模、減少使用的服務器規格也是一種解決問題的方法。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"選擇合適規格的服務器"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在使用 Kubernetes 之前,出於運維複雜度的考慮,單臺服務器常常只部署一個服務。而由於各自成本的原因,公司在爲自建 IDC 採購服務器時通常都會指定標準規格的服務器,雲廠商在爲大家提供服務器時也是按預先設定的規格來提供。所以,不論是業務的應用服務還是數據庫、緩存等資源,往往都是選擇能滿足需求的最小實例,這也就導致了業務需要大量不同規格的服務器,提升了管理的複雜度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"儘管如此,業務的應用服務或者數據庫、緩存資源的單個實例還是很難將單臺服務器完全用滿。導致這種現象的原因很簡單,即需求的服務器規格與實際提供的服務器規格不一致。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"舉個例子,服務器廠商與雲廠商提供的服務器的 CPU 核心數(指超線程後的核心數,下同)與內存容量(GB)的比例(以下簡稱“CPU 與內存比”)通常在 1:2 - 1:8 之間,如果業務實例不進行混部,在實際使用過程中通常會遇到如下情況:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"需要的 CPU 與內存比過小。很多服務會通過使用大量緩存來支撐更多流量,單個用途的緩存經常需要 10GB 甚至數十 GB 不等的內存空間,而支持每秒數千甚至數萬的緩存請求可能只需要 1 個 CPU 核心,需要的 CPU 與內存比能達到 1:16 甚至是 1:32。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"需要的 CPU 與內存比不標準。對於應用服務來說,由於各業務實現邏輯均不相同,需要的 CPU 核心數和內存可能不盡相同,例如 6 核 8GB、8 核 12GB、12 核 16GB,而即使將多種不同的規格合併標準化後也可能會出現 1:1.33 或是 1:1.5 等比例。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"諸如上述 CPU 與內存比是 1:1.33 或 1:32 的非常規比例,服務器廠商或雲廠商通常都不會提供。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"雖然不同服務對 CPU 和內存的需求不同,但一個業務所需的 CPU 與內存的總和是確定的,所以我們可以據此選擇更大規格的服務器,然後再切分出各個服務需要的規格。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"切分的方式也有很多,以往通常會採用虛擬化的方案進行切分,但虛擬化也伴隨着額外的性能及資源開銷,Hypervisor、Guest OS 等等都需要消耗額外的 CPU 和內存,在虛擬機中運行應用的性能也比在宿主機中直接運行的性能要差一些。容器的出現使得大家有了更輕量級的服務隔離方案,減少了因虛擬化帶來的 CPU 和內存開銷,而 Kubernetes 的出現又使得容器的編排、調度和部署變得更爲容易。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過 Kubernetes 將不同的緩存、數據庫、應用服務進行搭配混部,很容易就能將單臺服務器的 CPU 和內存都充分利用起來。而通過搭配選擇多種 CPU 核心數與內存比的服務器,又能滿足業務對計算資源和內存的不同需求。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以微博爲例,我們的新 App 後端服務由數十個 Java 應用實例構成,使用了數百 GB 的 Memcached、Redis 等緩存,最終我們選擇了一臺 104 核 768GB 內存的雲上裸金屬服務器,單臺服務器就可以滿足整個業務的需求。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"可用性的問題"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"與傳統將服務部署在多臺小規格服務器中不同,將全部服務部署在幾臺大規格服務器甚至單臺服務器上,可用性會明顯降低。當單臺服務器出現故障時,影響的不再是某個服務或是 App 中的某個功能,而是整個 App。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"業務對服務可用性的要求一定是隨着業務重要性提升而不斷提高的。在業務初上線時,雙機互備或單機 + 快速遷移的模式都是可選的方案。但隨着業務不斷髮展,對可用性的要求會不斷提升。與此同時,伴隨着業務發展和服務規模擴大、業務需求與功能不斷迭代,應用服務、數據庫以及緩存的種類和數量也會不斷變多,一定會出現單臺服務器無法部署所有服務的情況,各個服務實際需求的資源和數量也會變得與最初不同。屆時,我們需要根據資源的實際需求重新分配和部署資源,這在一定程度上能夠降低單臺服務器故障對整體的影響。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"快速翻倍擴容"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 App 剛上線時,流量的增長並不總是呈線性增長的,很多時候會受用戶傳播或推廣等因素影響而出現增長速度明顯加快甚至流量翻倍的情況。同時,由於新 App 存在不確定性,也不會像公司裏已有的成熟 App 那樣,能夠比較容易地預判接下來的流量情況。所以對於一個新 App 而言,更需要能進行快速和全面的擴容。在新 App 用戶量快速上升的階段,快速翻倍擴容是最有效也是最穩妥的手段。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"與傳統分散部署的方式相比,單臺集中部署的方式能讓我們更清晰地瞭解整個服務所需的服務器資源。以這種方案進行部署時,翻倍擴容僅需要將之前服務器上部署的所有服務在新服務器上再部署一次即可。其中根據服務的不同類型,主要分爲以下三種情況:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"無狀態的服務,如業務自身的 Web 服務、RPC 服務、隊列處理機等,只要配置好上游的負載均衡或者配置好對應的服務發現即可。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"有狀態但自身提供主從同步的服務,如 MySQL、Redis 等數據庫或緩存,可以通過其自身的主從同步機制進行相關的同步並完成從庫的搭建。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"有狀態但自身不提供主從同步的服務,如微博使用最廣泛的 Memcached 緩存,其自身並沒有提供主從、分片以及高可用等邏輯。微博以 Memcached 爲基礎提供了 Cache Service 服務,在 Cache Service 內部實現了主從、分片和高可用的邏輯,在業務自身的隊列處理機中對 Memcached 的各個實例進行數據同步。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/d2\/d2855674581d18756d1b4fdbfdd63289.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"(服務器翻倍擴容後的調用關係)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"按照翻倍擴容的方式,App 上線前期能夠及時支撐 App 流量的快速增長,同時讓物理服務器數量依舊維持在較少的規模,管理複雜度依然維持在較低的水平。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"新的挑戰"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最初使用這套方案時,我們選擇了 104 核 768GB 內存的雲上裸金屬服務器,雖然 App 的 DAU 超過百萬,但是 Web 服務的每秒接口請求量並不多。不論是 Web 服務、RPC 服務還是緩存等部署的實例個數都是根據最小規模部署個數確定的,與實際流量相比,壓測值的吞吐量要數倍於日常流量,整體運行一直也比較平穩。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"後來我們部署了更大規模的集羣,在服務器選型上也選擇了單臺規格更大的 AMD 256 核 2TB 內存的服務器。隨着我們將核心業務遷到了新的集羣中,出現了比較明顯的業務抖動問題。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"綁核策略的優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"與 Intel 的機型相比,AMD 的服務器雖然也是 2 個 Socket,但由於 CPU 架構設計原因,實際上有 8 個距離不同的 NUMA 節點,跨 NUMA 訪問的開銷相比之前變大了。同時,由於部署了更多負載更高的業務,最早不綁核只限定 CPU、內存使用量的方式,也導致 CPU 調度的開銷也變得更大。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們在大規模部署和使用的時候分別經歷了 3 個階段:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"所有容器不綁核,僅限定 CPU、內存使用量;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在線應用容器綁核、緩存容器不綁核;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"在線應用容器綁核、多個緩存部署在同一個綁核的容器裏。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/17\/17302fa9695cbc7b715b9a183728dd66.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"(單臺物理機中容器部署架構)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由於 Redis、Memcached 等緩存對 CPU 資源消耗較少,很多緩存實例甚至都用不滿 1 個核心,我們將多個緩存部署在同一個綁核容器中集中管理。當我們對在線應用服務和緩存等資源都進行了綁核後,業務抖動問題得到了解決。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除了解決了業務性能抖動的問題外,這種綁核策略也降低了管理的複雜度。在宿主機使用了 lxcfs 後,容器內 CPU 與內存等的監控也接近於原有使用物理機或 ECS 的方式。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"調用鏈路的優化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨着部署了更多的核心服務,通過網絡對各種 RPC、緩存的請求數量也逐漸增多,從單臺服務器來看,網絡請求所帶來的開銷也越來越大。對於需要大量網絡請求 RPC、緩存的服務而言,如果能顯著減少網絡的請求,就能顯著減少接口的耗時。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在以往的部署模式下,由於各個 RPC、緩存等都是獨立進行部署的,並沒有有效的方式減少網絡請求的開銷,唯一能做的就是儘可能讓請求就近訪問。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而當我們使用的單臺服務器規格達到 256 核 2TB 內存時,我們發現可以在單臺服務器上部署 16 個 12 核的在線應用服務或是 24 個 8 核的在線應用服務或是 32 個 6 核的在線應用服務,單臺服務器能塞下的實例數量甚至比用 16 核 1U 的服務器裝滿一個機架時塞的還多。在這種情況下,如果部署的服務之間有相互依賴關係,就有必要對部署服務的編排方式、路由或是負載均衡策略進行優化,使相互依賴的服務儘可能同機部署,調用依賴的服務儘可能同機訪問。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"除此之外,以往爲了減少跨網絡的數據傳輸量和傳輸時間,通常還會對跨網絡的服務請求和響應數據進行壓縮。這種方式雖然降低了網絡開銷和整體請求耗時,但也增加了請求雙方的 CPU 負載,並增加了一定的耗時。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/wechat\/images\/94\/94e14567f2bab82141eb2706563398ef.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"(優先調用同機內提供的服務)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們基於新的部署方式調整了部署後的負載均衡策略,在已有的負載均衡策略之前增加了一層本地優先策略。在 RPC 調用負載均衡策略時判斷目標實例列表中是否有本機部署的 Pod 的 IP,如果有則優先訪問。同時,由於本機內部的調用並沒有產生跨網絡傳輸的開銷,我們取消了對這種請求及其結果的壓縮,進一步降低了請求耗時。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在實際線上對比測試時,我們發現,很多耗時短、結果簡單的 RPC 接口,在採用了新的部署方式和負載均衡策略後,平均耗時比之前單獨部署時還能降低 2ms 及以上。而當單個請求流程中需要跨網絡調用更多種類和次數 RPC 的時候,整體平均耗時還能下降更多。例如我們很多接口在一次請求中需要調用不同的 RPC 接口數十次,從接口整體響應耗時來看,能降低 10ms - 30ms,接口整體耗時能下降 10% - 30%。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"後記"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"正如前文中所述,目前我們搭建的 Kubernetes 集羣使用的物理機規格主要以 256 核 2TB 內存的服務器爲主,包括微博核心業務在內的很多在線應用服務和 Redis、Memcached 緩存等也已經穩定運行在這個集羣中。在這種規格的服務器上,由於單機可以部署更多數量和類型的服務,藉助基於本地優先調用的負載均衡策略,部分核心接口的平均耗時和 P99 時間相比傳統的部署方式,還下降了 10% 以上,在日常流量低的時間段甚至能下降 30%。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而且,由於擴大了集羣中的物理服務器規模,單臺物理機對單個服務的影響已經非常低。同時,伴隨着之前文章所述的數據恢復中心的使用,即使有狀態服務所在的物理機宕機,也可以在短時間內將實例重新遷移到集羣中其他宿主機上,快速重新部署、快速恢復,進一步減少對業務的影響。"}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"作者簡介"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"孫雲晨,微博基礎架構部業務改造負責人。2015 年加入微博,參與並負責微博多個業務系統架構升級改造工作。目前主要關注資源服務化及業務研發效率的提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"相關文章:"},{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/www.infoq.cn\/article\/GLeVJM7f6fMgYiNQDcIt","title":"xxx","type":null},"content":[{"type":"text","text":"業界前所未有:10分鐘部署十萬量級資源、1小時完成微博後端異地重建"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章