Step 1: Download the MySQL connector JAR (Connector/J) from Maven.
Step 2: Launch the shell with the driver on the classpath: spark2-shell --jars mysql-connector-java-8.0.15.jar
Step 3: Read the MySQL table over JDBC and write it into Hive.
// Scala version
val df = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://rr-bp1d22ltxgwa09g44720.mysql.rds.aliyuncs.com/" + dbname + "?useUnicode=true&characterEncoding=UTF-8")
  .option("driver", "com.mysql.cj.jdbc.Driver") // Connector/J 8.x class; com.mysql.jdbc.Driver is the deprecated 5.x name
  .option("fetchsize", 1000)
  .option("numPartitions", 2) // note: parallel reads also need partitionColumn/lowerBound/upperBound
  .option("dbtable", "(select * from " + tablename + ") as t")
  .option("user", "username")
  .option("password", "password")
  .load()
df.write.mode("overwrite").saveAsTable("hive_table_name")
If you have many tables to sync, just wrap the code above in a function and loop over the table names.
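A minimal sketch of that loop, assuming the same host, credentials, and options as above (the table list and target names are hypothetical):

```scala
// Hypothetical helper: pull one MySQL table over JDBC and overwrite it into Hive.
def syncTable(dbname: String, tablename: String): Unit = {
  val df = spark.read.format("jdbc")
    .option("url", s"jdbc:mysql://rr-bp1d22ltxgwa09g44720.mysql.rds.aliyuncs.com/$dbname?useUnicode=true&characterEncoding=UTF-8")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("fetchsize", 1000)
    .option("dbtable", s"(select * from $tablename) as t")
    .option("user", "username")
    .option("password", "password")
    .load()
  // Use the source table name as the Hive table name for simplicity.
  df.write.mode("overwrite").saveAsTable(tablename)
}

// Loop over whatever tables need syncing.
val tables = Seq("table_a", "table_b", "table_c")
for (t <- tables) syncTable("mydb", t)
```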
Hive 1.1 does not support the Date data type, so when you run into it, cast the Date columns to String first. I used the bluntest approach here: building an HQL statement to do the conversion.
// Scala version
// Register the JDBC DataFrame so it can be queried with SQL.
df.createOrReplaceTempView("df")

var columns = df.columns.toBuffer
val dateTypeColumns = Array("last_biz_date", "final_repayment_day", "principal_settled_day", "value_date")
columns --= dateTypeColumns

// Cast each Date column to String, then append the remaining columns unchanged.
val casts = dateTypeColumns.map(c => s"CAST($c AS STRING)").mkString(", ")
val projection = casts + "," + columns.mkString(",")

def getColumns(x: String) = spark.sql(s"select $x from df")

getColumns(projection).write.mode("overwrite").saveAsTable("hive_table_name")
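Instead of hard-coding the Date column names, you can discover them from the DataFrame's schema. A sketch of that alternative, assuming the same `df` from the JDBC read above (the Hive table name is a placeholder):

```scala
import org.apache.spark.sql.types.DateType
import org.apache.spark.sql.functions.col

// Cast every DateType column to String automatically, leaving other columns untouched.
val converted = df.schema.fields.foldLeft(df) { (acc, field) =>
  if (field.dataType == DateType) acc.withColumn(field.name, col(field.name).cast("string"))
  else acc
}

converted.write.mode("overwrite").saveAsTable("hive_table_name")
```

This avoids maintaining a column list by hand when the source schema changes.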