When Spark Meets Zeppelin: Hands-On Cases

In an earlier article, [A Comprehensive Introduction to Apache Zeppelin: Big Data Visualization Has Never Been This Simple](https://mp.weixin.qq.com/s?__biz=MzU3MzgwNTU2Mg==&mid=2247495608&idx=1&sn=dfb63a8e4af3820746a7903664b83f58&chksm=fd3ea92dca49203bb4461d89a3fbf2c6007888c47ef8fbb3c8b2640eec7b45f9f6b7d71d9967&token=808170609&lang=zh_CN#rd), we covered Zeppelin's main features and closed with a usage example. In this article we walk through two small, hands-on cases that show how Zeppelin and Spark work together.

So far, Apache Spark supports three cluster manager types: Standalone, Apache Mesos, and Hadoop YARN. Following the official documentation, we will use the bundled Docker scripts to build a Spark standalone-mode environment.

#### Setting up a Spark standalone environment

Spark standalone is the simple cluster manager shipped with Spark, and it makes it easy to set up a cluster. You can build a Spark standalone environment with the following steps.

**Note**

Apache Zeppelin and the Spark web UI both use port 8080 by default, so you may need to change `zeppelin.server.port` in `conf/zeppelin-site.xml`.

**1. Build the Docker image**

The Docker script files live under `scripts/docker/spark-cluster-managers`:

```
cd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_standalone
docker build -t "spark_standalone" .
```

**2. Start the container**

```
docker run -it \
-p 8080:8080 \
-p 7077:7077 \
-p 8888:8888 \
-p 8081:8081 \
-h sparkmaster \
--name spark_standalone \
spark_standalone bash;
```

The `sparkmaster` hostname used here for the Docker container should be mapped in `/etc/hosts`.

**3. Configure the Spark interpreter in Zeppelin**

On Zeppelin's interpreter settings page, set the Spark master to `spark://<hostname>:7077`.

![file](https://static001.geekbang.org/infoq/2a/2a14feb46a0af6228878a72f6e64b7b5.png)

**4. Run Zeppelin with the Spark interpreter**

After running a single paragraph with the Spark interpreter in Zeppelin, browse to `http://<hostname>:8080` and check that the Spark cluster is running.

![file](https://static001.geekbang.org/infoq/41/417440ad06f4bcd753bef18dae731b7e.png)

You can then verify that Spark is running well inside Docker with:

```
ps -ef | grep spark
```

#### Spark on Zeppelin: reading a local file

Suppose we have a local file named bank.csv, with a schema and sample rows like:

```
age:Integer, job:String, marital:String, education:String, balance:Integer
20;teacher;single;本科;20000
25;plumber;single;本科;10000
21;doctor;single;本科;25000
23;singer;single;本科;20000
...
```

First, convert the CSV data into an RDD of `Bank` objects by running the script below; it also uses `filter` to drop the header row.

```
val bankText = sc.textFile("yourPath/bank/bank-full.csv")
case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)

// split each line, filter out the header (it starts with "age"), and map into the Bank case class
val bank = bankText.map(s => s.split(";")).filter(s => s(0) != "\"age\"").map(
  s => Bank(s(0).toInt,
    s(1).replaceAll("\"", ""),
    s(2).replaceAll("\"", ""),
    s(3).replaceAll("\"", ""),
    s(5).replaceAll("\"", "").toInt
  )
)
// convert to a DataFrame and create a temporary table
bank.toDF().registerTempTable("bank")
```

To see the age distribution as a chart, run the following SQL:

%sql

select age, count(1) from bank where age < 30 group by age order by age

![file](https://static001.geekbang.org/infoq/ad/ad251591f6f751418210550ed2865499.png)

You can turn the age condition into an input form by replacing 30 with ${maxAge=30}:

%sql

select age, count(1) from bank where age < ${maxAge=30} group by age order by age

To see the age distribution for a given marital status, with a combo box for selecting the status:

%sql

select age, count(1) from bank where marital="${marital=single,single|divorced|married}" group by age order by age

![file](https://static001.geekbang.org/infoq/f9/f9f148c1af2215ba8df29abf13b751a2.png)

Zeppelin's charting is simple but powerful: the same result can be rendered as a table, bar chart, line chart, pie chart, or scatter plot.

To plot the user count per age yourself, organize the output into the following format:

```
println("%table column_1\tcolumn_2\n" + value_1 + "\t" + value_2 + "\n" + ...)
```

![file](https://static001.geekbang.org/infoq/0e/0e6ddf8dcd5e3924d547922491e87461.png)

Finally, we can even write the result directly into MySQL:

```
%spark
df3.write.mode("overwrite")
  .format("jdbc")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("user", "root")
  .option("password", "password")
  .option("url", "jdbc:mysql://localhost:3306/spark_demo")
  .option("dbtable", "record")
  .save()
```

#### Spark on Zeppelin: reading HDFS files

First we need to configure the HDFS file system interpreter: in the notebook, click the gear icon and select HDFS to enable it.

![file](https://static001.geekbang.org/infoq/11/11b526258377571d3ae33d2df23a9d12.png)

Then Zeppelin can read HDFS files directly. For example, the paragraph below reads a JSON file from HDFS, takes its first column, and saves the result back to HDFS in Parquet format:

![file](https://static001.geekbang.org/infoq/d9/d967dd3c96c90117a60bf843d629680c.png)

#### Spark on Zeppelin: reading streaming data

We can follow the official example that reads a live Twitter stream:

```
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import scala.collection.mutable.HashMap
import java.io.File
import org.apache.log4j.Logger
import org.apache.log4j.Level
import sys.process.stringSeqToProcess

/** Configures the OAuth credentials for accessing Twitter */
def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
  val configs = new HashMap[String, String] ++= Seq(
    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
  println("Configuring Twitter OAuth")
  configs.foreach { case (key, value) =>
    if (value.trim.isEmpty) {
      throw new Exception("Error setting authentication - value for " + key + " not set")
    }
    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
    System.setProperty(fullKey, value.trim)
    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
  }
  println()
}

// Configure Twitter credentials
val apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxx"
val apiSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
val accessToken = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
val accessTokenSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)

import org.apache.spark.streaming.twitter._
val ssc = new StreamingContext(sc, Seconds(2))
val tweets = TwitterUtils.createStream(ssc, None)
val twt = tweets.window(Seconds(60))

case class Tweet(createdAt: Long, text: String)
twt.map(status =>
  Tweet(status.getCreatedAt().getTime() / 1000, status.getText())
).foreachRDD(rdd =>
  // Below line works only in spark 1.3.0.
  // For spark 1.1.x and spark 1.2.x,
  // use rdd.registerTempTable("tweets") instead.
  rdd.toDF().registerAsTable("tweets")
)

twt.print

ssc.start()
```

Similarly, Zeppelin can read data from Kafka, register it as a table, and then run all kinds of computations on it. Here is a Zeppelin version of WordCount:

```
%spark
import _root_.kafka.serializer.DefaultDecoder
import _root_.kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._

// prevent INFO logging from polluting the output
sc.setLogLevel("WARN")

// create the StreamingContext with a 5-second batch interval
val ssc = new StreamingContext(sc, Seconds(5))

val kafkaConf = Map(
  "metadata.broker.list" -> "localhost:9092",
  "zookeeper.connect" -> "localhost:2181",
  "group.id" -> "kafka-streaming-example",
  "zookeeper.connection.timeout.ms" -> "1000"
)

val lines = KafkaUtils.createStream[Array[Byte], String, DefaultDecoder, StringDecoder](
  ssc,
  kafkaConf,
  Map("test" -> 1), // subscribe to topic "test" with 1 receiver thread
  StorageLevel.MEMORY_ONLY
)

val words = lines.flatMap { case (x, y) => y.split(" ") }

import spark.implicits._

val w = words.map(x => (x, 1L)).reduceByKey(_ + _)
w.foreachRDD(rdd => rdd.toDF.registerTempTable("counts"))
ssc.start()
```

Query the top 10 words of each mini-batch from the temporary table `counts`:

```
select * from counts order by _2 desc limit 10
```

![file](https://static001.geekbang.org/infoq/37/3720d879c64abe2fdae0a1fae3dc2e4f.png)

Powerful, isn't it? Give it a try yourself.

Original article (Chinese): [英雄惜英雄-當Spark遇上Zeppelin之實戰案例](https://mp.weixin.qq.com/s?__biz=MzU3MzgwNTU2Mg==&mid=2247497261&idx=1&sn=a7d01f2d015b3d012e97230dde3d7f7f&chksm=fd3eb0b8ca4939aeb885cd64372b3e221d49e2bb6e99b5a1ea109d0673e413114b4bfebcb4df&token=808170609&lang=zh_CN#rd)

> Follow the [《大數據成神之路》](https://shimo.im/docs/jdPhrtFwVCAMkoWv) series of articles.
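A footnote to the bank.csv example above: on Spark 2.x and later, the manual split/filter/`replaceAll` parsing can be replaced by Spark's built-in CSV reader. A minimal sketch, assuming a `SparkSession` named `spark` and the same semicolon-separated file (the path is a placeholder, as in the original):

```scala
// Spark 2.x+ sketch: the built-in CSV reader handles the header row,
// the ";" delimiter, and the quoted fields that the RDD version
// strips by hand with replaceAll("\"", "").
val bank = spark.read
  .option("header", "true")      // drop the "age";"job";... header
  .option("sep", ";")            // bank-full.csv is semicolon-separated
  .option("inferSchema", "true") // age and balance become numeric columns
  .csv("yourPath/bank/bank-full.csv")

// createOrReplaceTempView supersedes registerTempTable in Spark 2.x,
// so the same %sql paragraphs keep working against "bank"
bank.createOrReplaceTempView("bank")
```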
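The `%table` display hook mentioned in the plotting section can be driven from any Scala paragraph. A small sketch that prints an age/count table Zeppelin would render as a bar or line chart; the column names and sample data here are made up for illustration:

```scala
// A paragraph whose output starts with "%table" is rendered by Zeppelin
// as an interactive table/chart: columns are tab-separated,
// rows are newline-separated.
val counts = Seq((20, 12), (21, 9), (23, 15)) // hypothetical (age, users) pairs
val header = "%table age\tusers"
val rows = counts.map { case (age, n) => s"$age\t$n" }
println((header +: rows).mkString("\n"))
```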
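The MySQL write in the local-file case can be checked from the same notebook by reading the table back over JDBC. A sketch that mirrors the assumed connection settings of the write paragraph (`root`/`password`, database `spark_demo`, table `record` — illustrative values, not a fixed contract):

```scala
%spark
// Read back the table that df3.write saved; every option mirrors
// the write paragraph above.
val record = spark.read.format("jdbc")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("url", "jdbc:mysql://localhost:3306/spark_demo")
  .option("user", "root")
  .option("password", "password")
  .option("dbtable", "record")
  .load()

record.show(10) // eyeball the rows that were just written
```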