Spark SQLContext: reading JSON files

Reading multi-line JSON directly with sqlContext.read().json("path") fails with the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
	at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.buildReader(JsonFileFormat.scala:118)
	at org.apache.spark.sql.execution.datasources.FileFormat$class.buildReaderWithPartitionValues(FileFormat.scala:129)
	at org.apache.spark.sql.execution.datasources.TextBasedFileFormat.buildReaderWithPartitionValues(FileFormat.scala:160)
	at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:294)
	at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:290)
	at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:312)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:610)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3278)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
	at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
	at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
	at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
	at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
	at App.main(App.java:16)
19/09/01 15:03:48 INFO SparkContext: Invoking stop() from shutdown hook
19/09/01 15:03:48 INFO SparkUI: Stopped Spark web UI at http://192.168.1.2:4040
19/09/01 15:03:48 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/09/01 15:03:48 INFO MemoryStore: MemoryStore cleared
19/09/01 15:03:48 INFO BlockManager: BlockManager stopped
19/09/01 15:03:48 INFO BlockManagerMaster: BlockManagerMaster stopped
19/09/01 15:03:48 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/09/01 15:03:48 INFO SparkContext: Successfully stopped SparkContext
19/09/01 15:03:48 INFO ShutdownHookManager: Shutdown hook called
19/09/01 15:03:48 INFO ShutdownHookManager: Deleting directory /private/var/folders/s1/d29h6xhj3r7dnw9d5m2d626w0000gn/T/spark-cbdf2489-115a-4e4b-93fb-4e5e06c24586

Converting the data to single-line JSON makes the problem go away. By default, spark.read().json() expects JSON Lines input, i.e. one complete JSON object per line; a record spread over several lines cannot be parsed line by line, so every row lands in the internal _corrupt_record column and schema inference fails with the error above.

Sample data (the multi-line layout that triggers the error):

{
  "name": "芙蓉姐姐",
  "age": 12,
  "sex": "W"
}
{
  "name": "女媧",
  "age": 3008,
  "sex" : "W"
}
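
For comparison, the single-line (JSON Lines) form of the same records, which the default reader parses with no extra options:

{"name": "芙蓉姐姐", "age": 12, "sex": "W"}
{"name": "女媧", "age": 3008, "sex": "W"}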

I then tested two workarounds suggested online:

Method 1: set the multiline option on the reader.

Method 2: pre-read the files with wholeTextFiles and feed the resulting strings to the JSON reader, as in the snippet below.

// Method 1: the multiline option (documented spelling is "multiLine"; option keys are case-insensitive)
// Dataset<Row> json = sqlContext.read().option("multiline", true).option("mode", "PERMISSIVE").json("./resources/app.json");

// Method 2: read each file as one string, then parse those strings as JSON
Dataset<Row> json = sqlContext.read().json(sc.wholeTextFiles("./resources/app.json").values());

json.printSchema();
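
For context, here is a self-contained sketch of those two attempts (an illustration only: the local[*] master and app name are my own assumptions, the class name App comes from the stack trace, and Spark 2.3 is assumed per the error message):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

public class App {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("json-demo").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Attempt 1: let the JSON source itself handle records that span lines.
        Dataset<Row> viaOption = sqlContext.read()
                .option("multiline", true)
                .option("mode", "PERMISSIVE")
                .json("./resources/app.json");

        // Attempt 2: read each file as a single string, then parse the strings.
        // (The json(JavaRDD) overload is deprecated since Spark 2.2 but still works.)
        Dataset<Row> viaWholeText = sqlContext.read()
                .json(sc.wholeTextFiles("./resources/app.json").values());

        viaOption.printSchema();
        viaWholeText.printSchema();

        sc.stop();
    }
}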

But this did not work well: the multi-line JSON was still not parsed correctly.

In the end it turned out that both the multiline option and the wholeTextFiles approach place strict requirements on the file contents: the file must be standard, valid JSON. After re-checking and correcting the data, the job ran fine.
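
To make "standard JSON" concrete (an assumption on my part; the post does not show the corrected file): the sample above is not itself a single valid JSON document, since it is two top-level objects concatenated, so multiline mode cannot parse the file as one document. Wrapping the records in a JSON array turns the file into one valid document:

[
  {"name": "芙蓉姐姐", "age": 12, "sex": "W"},
  {"name": "女媧", "age": 3008, "sex": "W"}
]

With a file like this, sqlContext.read().option("multiline", true).json("./resources/app.json") yields one row per array element.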
