開發效率提升15倍!批流融合實時平臺在好未來的應用實踐

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文由好未來資深數據平臺工程師毛祥溢分享,主要介紹批流融合在教育行業的實踐。內容包括兩部分,第一部分是好未來在做實時平臺中的幾點思考,第二部分主要分享教育行業中特有數據分析場景。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大綱如下:"}]},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"背景介紹"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"好未來 T-Streaming 實時平臺"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"K12 教育典型分析場景"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"展望與規劃"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"1.背景介紹"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"好未來介紹"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b2\/b2da451157f399df58a24a3ecba883ed.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"好未來是一家 2003 年成立教育科技公司,旗下有品牌學而思,現在大家聽說的學而思培優、學而思網校都是該品牌的衍生,2010 年公司在美國納斯達克上市,2013 年更名爲好未來。2016 年,公司的業務範圍已經覆蓋負一歲到 24 歲的用戶。目前公司主營業務單元有智慧教育、教育領域的開放平臺、K12 教育以及海外留學等業務。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"好未來數據中臺全景圖"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/15\/15b7b233fe68a6cea37488a96a5a0e0e.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上圖爲好未來數據中臺的全景圖,主要分爲三層:"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一層是數據賦能層"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二層是全域數據層"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三層是數據開發層"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先,數據賦能層。主要是商業智能、智慧決策的應用,包括一些數據工具、數據能力以及專題分析體系,數據工具主要包括埋點數據分析工具、AB 測試工具、大屏工具;數據能力分析主要包括未來畫像服務、未來增長服務、未來用戶服務以及新校區的選址服務;專題分析體系主要包企業經營類專題分析等等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"其次,數據全域層。我們期望將全集團所有的事業部的數據進行深入的拉通和融合,打通不同業務線、產品線的用戶池,從而盤活全集團的數據。具體的手段是  IDMapping,將設備 id、自然人、家庭三個層級的 id 映射關係挖掘出來,將不同產品上的用戶數據關聯起來。這樣就能夠形成一個打的用戶池,方便我們更好的賦能用戶。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後,數據開發層。數據開發通過一些列的平臺承載了全集團所有的數據開發工程,主要包括數據集成、數據開發、數據質量、數據服務、數據治理等服務。我們今天要分享的實時平臺就是在數據開發中。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"2.好未來 T-Streaming 實時平臺"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺構建前的訴求"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/9f\/9fca03f415ff1bab479a6edb44e14a25.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"實時平臺在構建之初,我們梳理了四個重要的訴求:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個訴求是期望有一套"},{"type":"text","marks":[{"type":"strong"}],"text":"統一的集羣"},{"type":"text","text":",通過提供多租戶,資源隔離的方式提高資源利用率,解決多個事業部多套集羣的問題。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二個訴求是期望通過平臺的方式"},{"type":"text","marks":[{"type":"strong"}],"text":"降低實時數據開發的門檻"},{"type":"text","text":",從而能夠覆蓋更多的開發者。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三個訴求是期望能夠提供"},{"type":"text","marks":[{"type":"strong"}],"text":"通用場景的解決解方案"},{"type":"text","text":",提高項目的複用性,避免每個事業部都開發相同場景的分析工具。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第四個訴求是對作業進行"},{"type":"text","marks":[{"type":"strong"}],"text":"全方位的生命週期管理"},{"type":"text","text":",包括元數據和血緣,一旦有一個作業出現異常,我們可以快速分析和定位影響範圍。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺功能概述"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/49\/49f6413b39822bed1ce7d97254a709ba.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"現在我們平臺已經是一個一站式的實時數據分析平臺,包括了數據集成、數據開發、作業保障、資源管理、數據安全等功能。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在"},{"type":"text","marks":[{"type":"strong"}],"text":"數據集成"},{"type":"text","text":"方面,我們支持數據庫、埋點數據、服務端日誌數據的集成,爲了能夠提高數據集成的效率,我們提供了很多的通用模板作業,用戶只需要配置即可快速實現數據的集成。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在"},{"type":"text","marks":[{"type":"strong"}],"text":"數據開發"},{"type":"text","text":"方面,我們支持兩種方式的作業開發,一種是 Flink SQL 作業開發、一種是 Flink Jar 包託管,在 Flink SQL 開發上我們內置了很多 UDF 函數,比如可以通過 UDF 函數實現維表 join,也支持用戶自定義 UDF,並且實現了 UDF 的熱加載。"},{"type":"text","text":"除此之外,我們也會記錄用戶在作業開發過程中的元數據信息,方便血緣系統的建設。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在"},{"type":"text","marks":[{"type":"strong"}],"text":"作業保障"},{"type":"text","text":"方面,我們支持作業狀態監控、異常告警、作業失敗之後的自動拉起,作業自動拉起我們會自動選擇可用的 checkpoint 版本進行拉起,同時也支持作業在多集羣之間的切換。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在"},{"type":"text","marks":[{"type":"strong"}],"text":"資源管理"},{"type":"text","text":"方面,我們支持平臺多租戶,每個租戶使用 namespace 進行隔離、實現了不同事業部、不同用戶、不同版本的 Flink 客戶端隔離、實現了計算資源的隔離。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在"},{"type":"text","marks":[{"type":"strong"}],"text":"數據安全"},{"type":"text","text":"方面,我們支持角色權限管理、表級別權限管理、操作審計日誌查詢等功能。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是我們平臺的功能,在賦能業務的同時,我們也還在快速迭代中,期望平臺簡單好用,穩定可信賴。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺的批流融合"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/bf\/bfae238171264f492480b51732b3b494.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接下來說一下平臺建設中的一些實踐,第一個是批流融合。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們先理清楚批流融合是什麼?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"批流融合可以分爲兩個概念,一個是 Flink 提出的批流融合,具體的理解就是一個  Flink SQL 既可以作用於流數據、也可以作用於批數據,通過保證計算引擎一致從而減少結果數據的差異,這是一個技術層面上的批流融合。另個一概念是我們內部提出來的,那就是架構層面的批流融合。具體的操作手法就是通過 Flink 作業保證數據倉庫 ODS 層的實時化,然後提供小時級別、分鐘級別的調度,從而提高數據分析的實時化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲什麼我們會提出架構上的批流融合,主要我們看到行業發展的兩個趨勢。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個趨勢是數據集成的實時化和組件化,比如 Flink 集成 Hive、Flink CDC 的持續完善和增強,這樣我們做數據集成的時候就會變得非常簡單。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二個趨勢是實時 OLAP  引擎越來越成熟,比如 Kudu+impala、阿里雲的 Hologres、湖倉一體的方案。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這兩個趨勢讓用戶開發實時數據會變得越來越簡單,用戶只需要關注 SQL 本身就可以。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如上圖所示,我們有三個類型的實時數倉,一個是基於 Hive 的、一個是基於實時 OLAP 引擎的、一個是基於 Kafka 的。其中,藍色線條就是我們 ODS 層實時化的具體實現。我們提供了一個統一的工具,可以將實時的將數據寫入到 Hive、實時 OLAP 引擎、當然還有 Kafka。這個工具使用起來比較簡單,如果是 MySQL 數據的同步,用戶只需要輸入數據庫名稱和表名就可以了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過 ODS 層實時化的工具,我們就可以在 Hive、實時 OLAP 引擎、Kafka 中構建實時數倉。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果是"},{"type":"text","text":" "},{"type":"text","marks":[{"type":"strong"}],"text":"Hive 實時數倉"},{"type":"text","text":",我們會使用 Flink 將實時的增量數據寫入到 ODS 層,然後提供一個定時 merge 的腳本,用來 merge 增量數據和歷史數據,從而保證 ODS 層的數據是最新最全的。配合 airflow 小時級別的調度能力,用戶就可以得到一個小時級別的數倉了。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果是類似於"},{"type":"text","text":" "},{"type":"text","marks":[{"type":"strong"}],"text":"Kudu \/ Hologres 這樣的實時 OLAP 引擎"},{"type":"text","text":",我們會先把離線數據從 Hive 中導入到實時 OLAP 引擎中,然後使用 Flink 將實時的增量數據寫入到 ODS  層,寫入的方式推薦使用 upsert 這樣的特性,這樣用戶就能夠得到一個純實時的數倉了。配合 airflow 分鐘級別的調度能力,用戶就可以得到一個分鐘級別的數倉了。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基於"},{"type":"text","text":" "},{"type":"text","marks":[{"type":"strong"}],"text":"Kafka 構建實時數倉"},{"type":"text","text":",就是非常經典的架構了,開發成本也比較高一些,除了必須要秒級更新的分析場景,我們不太建議用戶使用。當然在 2021 年的時候,我們也會去做 Flink 批流一體解決方案,讓用戶有更多選擇方式的同時,讓整個實時數倉變得更加簡單。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是我們對批流融合的思考和實踐,通過這種架構層面的批流融合,"},{"type":"text","marks":[{"type":"strong"}],"text":"原來需要開發一個月的實時需求,現在 2 天就差不多能完成"},{"type":"text","text":"。大大降低了開發實時數據的門檻,提高了數據分析的效率。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺 ODS 層實時化"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/1a\/1aff6f8df050bb36b2d1a2884378680f.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"說一下 ODS 層實時化我們具體是怎麼做的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"要想把 ODS 層數據實時化,我們需要解決兩個問題,第一個是離線數據的初始化問題,第二個是增量數據如何寫入的問題。離線數據導入比較好做,如果數據源是  MySQL,我們可以使用 DataX 或者 Spark 作業的方式將 MySQL 的全量數據導入到 Hive 中,而實時增量數據的寫入我們需要有兩個步驟,第一個步驟是將 MySQL  的 binlog 採集到 Kafka,第二個步驟是將 Kafka 的數據使用Flink作業導入到 Hive。這樣算下來,要解決 ODS 層實時化的問題,我們就需要一個離線初始化的作業,一個增量數據採集的作業,一個增量數據寫入的作業,也就是需要 3 個作業。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在我們的平臺上,我們對 ODS 層的 3 個作業進行了封裝和統一調度,用戶只需要輸入一個數據庫名稱和表的名稱就能完成 ODS 層實時化的工作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是我們批流融合中 ODS 層實時化的實現過程。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺 Flink SQL 開發流程"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/09\/09a9ff935b17ff73c34414b04b3bb83b.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們另外一個實踐,就是對 Flink SQL 的作業封裝。先看一下,在我們平臺上進行 Flink SQL 開發的整體流程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"從左往右看,數據源中的數據會通過 Maxwell、canal 這樣的工具採集到 Kafka,採集到 Kafka 的原始數據格式並不是統一的,所以我們需要將 Kafka 中的數據進行統一格式化處理,我們默認支持埋點數據格式、canal 數據格式、maxwell 數據的解析,也支持用戶自己上傳 Jar 包進行數據解析,解析得到的標準化數據就會再次發送到 Kafka。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然後我們會使用 Flink SQL 作業來消費 Kafka 的數據,進行 SQL 腳本的開發。這裏的 SQL 腳本開發和原生的 Flink SQL 的腳本開發有一點不一樣,原生的 SQL 腳本開發用戶需要編寫 Source 信息、Sink 信息,在我們平臺上用戶只需要寫具體的 SQL 邏輯就可以了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"那用戶寫完 SQL 之後,會將 SQL 作業信息提交到我們封裝好的 Flink SQL 執行作業上,最後通過我們封裝的 SQL 引擎將作業提交的 Flink 集羣上去運行。後面將介紹我們是怎麼封裝的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是在我們平臺上進行 Flink SQL 開發的流程,出了 Flink 作業本身的開發和提交,平臺也會保留與作業有關的各種輸入、輸出的 schema 信息。比如業務數據庫表的 schema 信息,經過同意加工之後的 schema 信息,數據輸出的表的 schema  信息,通過這些記錄,後期我們排查問題的時候就能夠快速梳理出作業的來龍去脈和影響範圍。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺 Flink SQL 開發過程"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/28\/28caff5968053a37b975c88742780ce3.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在我們平臺上開發 Flink SQL 作業,只需要三個步驟:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個步驟確認 Kafka 的  Topic 是否已經註冊過了,如果沒有註冊就需要用戶手動註冊下,完成註冊後,我們會把 Topic 的數據解析出來,將字段信息保存起來。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二步使用戶編寫 SQL,剛纔說過,用戶只需要寫具體的 SQL 邏輯,不需要寫"},{"type":"text","text":" "},{"type":"text","text":"Source"},{"type":"text","text":" "},{"type":"text","text":"和"},{"type":"text","text":" "},{"type":"text","text":"S"},{"type":"text","text":"ink"},{"type":"text","text":" "},{"type":"text","text":"信息。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三步是用戶指定將數據輸出到哪裏,現在平臺可以支持同時指定多個 Sink 存儲設備,比如將計算好的數據同時寫入到 Hive、Holo 等存儲。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過以上三個步驟的配置,用戶就可以提交作業了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接下來說一下,我們是怎麼做的,我把整個執行過程分爲 2 個階段 10 個步驟。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個階段就是作業準備階段,第二個階段就是 SQL 執行階段。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"■ 作業準備階段"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一步,用戶在頁面數據 SQL 和指定 Sink 信息。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二步,SQL 解析及校驗過程,當用戶提交 SQL 時,我們會對 SQL 進行解析,看看 SQL 中用到的  Source 表和 UDF 是否在平臺中註冊過。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三步,推測建表,我們會先運用下用戶的 SQL,然後得到 SQL 的返回結果,根據結果數據生成一些建表語句,最後通過程序自動到目標 Sink 存儲上去建表。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第四步,拼裝 Flink SQL 的腳本文件,得到一個有 Source、SQL、Sink 三要素的腳本文件。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第五步,作業提交,這裏會把 Flink SQL 文件提交到我們自己執行引擎中。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"■ "},{"type":"text","marks":[{"type":"strong"}],"text":"SQL 執行階段"}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一步是會初始化 StreamTableAPI,然後使用 connect 方法註冊  Kafka Source,Kafka 的 Source 信息需要指定數據解析的規則和字段的 schema 信息,我們會根據元數據自動生成。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二步是使用 StreamTableAPI 註冊  SQL 中使用到的維表和 UDF 函數,UDF 函數包括用戶自己上傳的 UDF 函數。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三步是使用 StreamTable API 執行 SQL 語句,如果有視圖也可以執行視圖。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第四步是一個比較關鍵的步驟,我們會把 StreamTabAPI 轉成 DataStream API。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第五步就是在 DataStream 的基礎上 addSink 信息了。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上是兩個階段的執行過程,通過第二個階段,用戶的 SQL 作業就會真正的運行起來。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺原生作業與模板任務"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/36\/36c29e90fc57c34abe5859bf99b2b045.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上面分享了我們的 Flink SQL 作業如何開發和運行,接下來說一下我們平臺對 JAR 包類型作業的支持。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在我們平臺上,我們支持用戶自己上傳 JAR 包作業,然後在我們平臺上進行管理。與此同時,爲了提高代碼通常場景的複用性,我們開發了很多模板作業,比如支持  Maxwell 採集的 binlog 直接寫入到"},{"type":"text","text":" "},{"type":"text","text":"H"},{"type":"text","text":"ive"},{"type":"text","text":"、Kudu、Holo 等存儲設備,支持阿里雲 SLS 日誌寫入到各種 OLAP 引擎。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺混合雲部署方案"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/bd\/bdf2347d9251baf25aa36b757e9a54de.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"講一下混合雲部署方案和平臺技術架構。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們平臺現在支持將作業提交到阿里雲機房、自建機房中,並且作業可以在兩個機房中來回切換。爲了要有這個功能呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"今年年初,隨着疫情的爆發,互聯網在線教育湧入了大量的流量,爲了應對暴增的流量,春節期間我們採購了上千臺機器進行緊急的部署和上線,後來疫情穩定住了之後,這些機器的利用率就比較低了,爲了解決這個問題,我們平臺就支持了混合雲部署方案,高峯期的時候作業可以遷移到阿里雲上運行,平常就在自己的集羣上運行,既節約了資源又保證了彈性擴容。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時平臺技術架構"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/a7\/a70dd4de72d37d312202655042200f6c.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接下來說一下平臺的技術架構。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們是一個前後端分離的項目,前端使用 vue+elmentui、服務端使用 springboot,不同的機房裏面我們會部署一個後端服務的實例。任務提交到不同的機房主要通過轉發層的 nginx+lua 來實現的。平臺上任務的提交、暫停、下線操作,都是通過驅動層來完成的,驅動層主要是一些 shell 腳本。最後就是客戶端了,在客戶端上我們做了 Namespace\/用戶\/Flink 版本的隔離。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"3.K12 教育典型分析場景"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"續報業務介紹"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/fc\/fc9e521ef6a9acc14b2c9e996bf7c76d.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們聊一個具體的案例,案例是 K12 教育行業中典型的分析場景,用戶續報業務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先說下什麼是續報,續報就是重複購買,用戶購買了一年的課程,我們期望用戶購買二年的課程。爲了用戶購買課程,我們會有一個集中的時間段用來做續報,每次持續一週左右,一年四次。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因爲續報週期比較集中,時間比較短暫,每次做續報業務老師對實時續報數據的需求就特別迫切。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"爲此我們做了一個通用的續報解決方案,來支持各事業部的續報動作。"},{"type":"text","text":"要做實時續報,有幾個挑戰。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個挑戰是計算一個用戶的訂單是否是續報,需要依賴這個用戶歷史上所有的訂單,也就是需要歷史數據參與計算。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二個挑戰就是一個訂單的變化會影響其它訂單的變化,是一個連鎖效應。比如用戶有 5 個訂單,編號爲 345 的訂單都是續報狀態,如果用戶取消了編號爲 3 的訂單,訂單 4 和訂單5的續報狀態就需要重新計算。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三個挑戰是維度變化很頻繁,比如用戶上午的分校狀態是北京,下午的分校狀態可能就是上海,上午的輔導老師是張三,下午的輔導老師就是李四,頻繁變化的維度給實時彙總數據帶來了挑戰。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"依賴歷史數據、訂單改變的連鎖效應、頻繁變化的維度,這些挑戰如果單個看都不算什麼,如果放在一起就會變得比較有意思了。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時續報解決方案"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/9f\/9f94f98809d277ed159c4ded7e52a07c.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先說下整體架構,我們採用的批流融合方式來做的,分成兩條線,一條線是分鐘級實時續報數據計算,一條是秒級實時續報數據計算。計算好的數據放在 MYSQL  中,用來做大屏和 BI 看板。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先看下藍色的這條線,我們會把 Hive 中的離線數據導入到 Kudu 中,離線數據都是計算好的訂單寬表。然後會使用 Flink 作業把新增的訂單做成寬表寫入到 Kudu  中,這樣 Kudu 裏面就會有最新最全的數據。配合 4 分鐘的調度,我們就提供了分鐘級的實時續報數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在看第一條橙色的線條,這條線上有兩個 Flink 作業,一個是 ETL Job,一個是 Update Job。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ETL job 會負責靜態維度的拼接與續報狀態的計算,靜態維度拼接我們是直接訪問  MySQL,然後緩存在 JVM 中。續報狀態的計算需要依賴歷史數據,ETL Job 會將所有的訂單數據加載到 JVM 中,具體的實現方法是我們自定義了一個 partitioncustom 方法,對所有的歷史數據進行了分片,下游的每個 Task 緩存一個分片的數據。通過將數據加載到內存中,我們大大的加快了 Flink 實時計算的速度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ETL Job 的計算的數據,會有兩個輸出,一個是輸出到 Kudu,用來保證 Kudu  中的數據最新最全,兩個一個數據是 Kafka,Kafka 中有一個 Topic 記錄的是是當前訂單的變化導致了哪些訂單或者維度變化的信息。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接在 Kafka 後面的程序就是 Update Job,專門用來處理受影響的訂單或者維度,直接去修改 MySQL 中相關的統計數據。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"這樣我們就通過 2 個 Flink 作業實現的實時續報的計算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最下面的一條線是實時維度的數據變更的處理,維度變更的數據會發送到 Kafka中,然後使用 Flink 進行處理,看看維度的變化影響了哪些數據的統計,最後將受影響的訂單發送到受影響的 Topic 中,由 Update Job 來重新計算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是我們實時續報的整體解決方案,如果有教育行業的朋友聽到這個分享,或許可以參考下。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"實時續報穩定性保障"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/62\/620be0947321085800f2e2bf5ab964f2.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我們看看這個通用的解決方案上線之後有哪些保障。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一個保障是"},{"type":"text","marks":[{"type":"strong"}],"text":"異地雙活"},{"type":"text","text":",我們在阿里雲和自建機房都部署了一套續報程序,如果其中一套有異常,我們切換前端接口就可以了。如果兩個機房的程序都掛了,我們重零開始啓動程序,也只需要 10 分鐘。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二個保障是"},{"type":"text","marks":[{"type":"strong"}],"text":"作業容錯"},{"type":"text","text":",我們有兩個 Flink 作業,這兩個作業隨停隨啓,不影響數據的準確性。另外一點就是我們緩存了所有訂單數據在 JVM 中,如果數據量暴漲,我們只需要改變 ETL 程序的並行度就可以,不用擔心 JVM 內存溢出。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三個保障是"},{"type":"text","marks":[{"type":"strong"}],"text":"作業監控"},{"type":"text","text":",我們支持作業的異常告警和失敗後的自動拉起,也支持消費數據延遲告警。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過以上保障措施,實時續報程序經過了幾次續報週期,都比較平穩,讓人很省心。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"4.展望與規劃"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上述內容詳細介紹了好未來當前業務及技術方案,總結而言我們通過多租戶實現各事業部資源隔離、通過批流融合的架構方案解決分析實時化、通過 ODS 層實時化解決數據源到 OLAP  的數據集成問題、通過 Flink SQL 封裝降低實時數據開發門檻、通過模板任務提供通用場景解決方案、通過混合雲部署方案解決資源的彈性擴容、通過實時續報解決方案覆蓋相同場景的數據分析。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/78\/78e4c64d2e2f7eb7602f2675d469999c.webp","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最後,來看一下我們展望和規劃。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"接下來我們要繼續深化批流融合,強化混合雲部署,提高數據分析的時效性和穩定性。支持算法平臺的實時化,數據應用的實時化,提高數據決策的時效性。"}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章