This article is based on the Spark architecture.
Table of Contents
Introduction
Many early big-data developers transitioned from Web development, and SQL is an essential skill for Web developers.
Spark SQL therefore provides DataFrame to simplify RDD development.
Definition
DataFrame = a distributed dataset built on top of RDD
DataFrame = RDD + Schema
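To make the "RDD + Schema" idea concrete, here is a minimal sketch (assuming a local SparkSession; the column names and sample rows are invented for illustration):

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: a DataFrame is row data plus a schema describing the columns.
val spark = SparkSession.builder().master("local[*]").appName("schema-demo").getOrCreate()
import spark.implicits._

// Build a DataFrame from sample tuples; column names come from toDF.
val df = Seq(("Michael", 29L), ("Andy", 30L)).toDF("name", "age")

// The schema travels alongside the data.
df.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: long (nullable = false)
```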
Features
Integration of RDD and SQL
cat /opt/services/spark/examples/src/main/resources/people.txt
# Michael, 29
# Andy, 30
# Justin, 19
/opt/services/spark/bin/spark-shell
case class People(name: String, age: Long)
val rdd = sc.textFile("/opt/services/spark/examples/src/main/resources/people.txt")
val mapRDD = rdd.map(_.split(",")).map(attributes => People(attributes(0), attributes(1).trim.toLong))
val filterRDD = mapRDD.filter(_.age > 20)
filterRDD.foreach(p => println(s"${p.name} ${p.age}"))
For more on Scala string interpolation, see Scala String Interpolation.
case class People(name: String, age: Long)
val rdd = sc.textFile("/opt/services/spark/examples/src/main/resources/people.txt")
# import spark.implicits._ (already imported automatically in spark-shell; required in standalone applications)
val df = rdd.map(_.split(",")).map(attributes => People(attributes(0), attributes(1).trim.toLong)).toDF()
df.createOrReplaceTempView("people")
spark.sql("SELECT * FROM people WHERE age > 20").show()
+-------+---+
| name|age|
+-------+---+
|Michael| 29|
| Andy| 30|
+-------+---+
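The same query can also be expressed with the DataFrame DSL, without registering a temp view — a sketch assuming a local SparkSession and the same sample data:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("dsl-demo").getOrCreate()
import spark.implicits._

val df = Seq(("Michael", 29L), ("Andy", 30L), ("Justin", 19L)).toDF("name", "age")

// Equivalent to: SELECT * FROM people WHERE age > 20
df.filter($"age" > 20).show()
```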
Unified Data Access
cat /opt/services/spark/examples/src/main/resources/people.json
# {"name":"Michael"}
# {"name":"Andy", "age":30}
# {"name":"Justin", "age":19}
/opt/services/spark/bin/spark-shell
val df = spark.read.json("/opt/services/spark/examples/src/main/resources/people.json")
df.createOrReplaceTempView("people")
spark.sql("SELECT * FROM people WHERE age > 20").show()
+---+----+
|age|name|
+---+----+
| 30|Andy|
+---+----+
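"Unified data access" means the same read/write API shape works across formats (JSON, Parquet, CSV, JDBC, ...); only the format changes. A sketch assuming a local SparkSession and a writable /tmp path:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("unified-demo").getOrCreate()
import spark.implicits._

val df = Seq(("Andy", 30L), ("Justin", 19L)).toDF("name", "age")

// Same API shape, different format: write as Parquet, then read it back.
df.write.mode("overwrite").parquet("/tmp/people_parquet")
val back = spark.read.parquet("/tmp/people_parquet")
back.show()
```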
Standard Data Connectivity
docker run --name spark-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7.17
docker exec -it spark-mysql /bin/bash
mysql -uroot -p123456
CREATE DATABASE IF NOT EXISTS db_spark DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
USE db_spark;
CREATE TABLE users (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(20) DEFAULT NULL COMMENT 'user name',
  PRIMARY KEY (`id`)
);
INSERT INTO users VALUES (1, 'XiaoWang');
INSERT INTO users VALUES (2, 'XiaoMing');
# cd /opt/services
# wget https://mirror.tuna.tsinghua.edu.cn/mysql/downloads/Connector-J/mysql-connector-java-5.1.49.tar.gz
# tar xf mysql-connector-java-5.1.49.tar.gz
/opt/services/spark/bin/spark-shell --jars /opt/services/mysql-connector-java-5.1.49/mysql-connector-java-5.1.49-bin.jar
# :paste (enter multi-line paste mode)
val df = spark.read.format("jdbc")
.option("url", "jdbc:mysql://localhost:3306/db_spark")
.option("driver", "com.mysql.jdbc.Driver")
.option("user", "root")
.option("password", "123456")
.option("dbtable", "users")
.load()
# Ctrl + D (exit paste mode and evaluate)
df.createOrReplaceTempView("users")
val sqlDF = spark.sql("SELECT * FROM users WHERE name = 'XiaoMing'")
sqlDF.show()
+---+--------+
| id| name|
+---+--------+
| 2|XiaoMing|
+---+--------+
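Results can also be written back through the same JDBC options — a sketch continuing the spark-shell session above, assuming the same connection settings and a hypothetical target table `users_filtered` (not runnable without the MySQL container):

```scala
// Append the query result back into a (hypothetical) users_filtered table,
// reusing the connection options from the read above.
sqlDF.write.format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/db_spark")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("user", "root")
  .option("password", "123456")
  .option("dbtable", "users_filtered")
  .mode("append")
  .save()
```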