1. Add the CarbonData dependency jar
Copy apache-carbondata-1.5.3-bin-spark2.3.2-hadoop2.7.2.jar into $SPARK_HOME/jars, or into a carbonlib directory created under $SPARK_HOME.
2. Add the Kafka dependency jars
Reading data from Kafka requires the Kafka client jars; copy the following jars into $SPARK_HOME/jars:
kafka-clients-0.10.0.1.jar
spark-sql-kafka-0-10_2.11-2.3.2.jar
3. Start the service with spark-shell
./bin/spark-shell --master spark://hostname:7077 --jars apache-carbondata-1.5.3-bin-spark2.3.2-hadoop2.7.2.jar
a) Import dependencies
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._
b) Create the session
The first argument is the data store directory and the second is the metastore directory; both can be HDFS paths.
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("/home/bigdata/carbondata/data","/home/bigdata/carbondata/carbon.metastore")
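The same call works with HDFS directories; a minimal sketch, assuming a hypothetical namenode address (namenode:9000):
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://namenode:9000/carbondata/data","hdfs://namenode:9000/carbondata/carbon.metastore")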
c) Create the source table
carbon.sql(
s"""
| CREATE TABLE IF NOT EXISTS kafka_json_source(
| id STRING,
| name STRING,
| age INT,
| birthday TIMESTAMP)
| STORED AS carbondata
| TBLPROPERTIES(
| 'streaming'='source',
| 'format'='kafka',
| 'kafka.bootstrap.servers'='hostname:9092',
| 'subscribe'='kafka_json',
| 'record_format'='json',
| 'comment'='get kafka data')
""".stripMargin).show()
d) Create the sink table
carbon.sql(
s"""
| CREATE TABLE IF NOT EXISTS kafka_json_sink(
| id STRING,
| name STRING,
| age INT,
| birthday TIMESTAMP)
| STORED AS carbondata
| TBLPROPERTIES(
| 'streaming'='sink')
""".stripMargin).show()
e) Create the stream job
carbon.sql(
s"""
| CREATE STREAM kafka_json_job ON TABLE kafka_json_sink
| STMPROPERTIES(
| 'trigger'='ProcessingTime',
| 'interval'='10 seconds')
| AS SELECT * FROM kafka_json_source
""".stripMargin).show()
4. Common SQL commands
a) Load local data
carbon.sql("LOAD DATA INPATH '/home/bigdata/carbondata/sample.csv' INTO TABLE kafka_json_source").show()
b) View the table schema
carbon.sql("DESC kafka_json_source").show()
c) Query table data
carbon.sql("SELECT * FROM kafka_json_source WHERE id=1").show()
d) Truncate table data
carbon.sql("TRUNCATE TABLE kafka_json_sink").show()
e) Drop a table
carbon.sql("DROP TABLE IF EXISTS kafka_json_source").show()
f) View stream job status
carbon.sql("SHOW STREAMS ON TABLE kafka_json_sink").show()
g) Drop a stream job
carbon.sql("DROP STREAM kafka_json_job").show()
5. Notes
a) Kafka producer configuration
CarbonData's Kafka consumer uses the deserializers below, so the Kafka producer must use the matching byte-array serializers; otherwise the data cannot be parsed:
key.deserializer = org.apache.kafka.common.serialization.ByteArrayDeserializer
value.deserializer = org.apache.kafka.common.serialization.ByteArrayDeserializer
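A minimal producer sketch in Scala (runnable in the same spark-shell, since kafka-clients is already on the classpath); the broker address and topic follow the example above, and the JSON payload is a hypothetical record matching the table schema:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "hostname:9092")
// Byte-array serializers to match CarbonData's ByteArrayDeserializer settings above
props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
// Hypothetical JSON record matching the kafka_json_source schema
val record = """{"id":"1","name":"tom","age":20,"birthday":"1995-01-01 00:00:00"}"""
producer.send(new ProducerRecord[Array[Byte], Array[Byte]]("kafka_json", record.getBytes("UTF-8")))
producer.close()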