
[Spark] DataFrame throws an exception when reading a JSON file: "Since Spark 2.3, the queries from raw JSON/CSV files are disallowed..."

Posted on 2021-5-31 20:49:38

     

When running a Scala script in IDEA that executes Spark SQL:

    df.show()
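For context, the failing script was roughly of the following shape. This is a minimal sketch: the SparkSession setup and the data/people.json path are assumptions; only the Student object and the show() call appear in the stack trace below.

import org.apache.spark.sql.SparkSession

object Student {
  def main(args: Array[String]): Unit = {
    // Local SparkSession so the script can run directly inside IDEA
    val spark = SparkSession.builder()
      .appName("Student")
      .master("local[*]")
      .getOrCreate()

    // Spark's default JSON reader expects one JSON object per line;
    // a pretty-printed file yields only the internal _corrupt_record column
    val df = spark.read.json("data/people.json")

    df.show() // throws the AnalysisException shown below

    spark.stop()
  }
}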

     

The following error message appears:

19/12/06 14:26:17 INFO SparkContext: Created broadcast 2 from show at Student.scala:16
Exception in thread "main" org.apache.spark.sql.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
    at org.apache.spark.sql.execution.datasources.json.JsonFileFormat.buildReader(JsonFileFormat.scala:120)
    at org.apache.spark.sql.execution.datasources.FileFormat$class.buildReaderWithPartitionValues(FileFormat.scala:129)
    at org.apache.spark.sql.execution.datasources.TextBasedFileFormat.buildReaderWithPartitionValues(FileFormat.scala:165)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:309)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:305)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:327)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:339)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:751)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:710)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:719)
    at Student$.main(Student.scala:16)
    at Student.main(Student.scala)
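Before fixing the file, you can run the diagnostic the message itself recommends: cache the parsed result, then query the corrupt-record column to see how many rows failed to parse. A minimal sketch of that, assuming the spark session and file path from the sketch above (the explicit schema is also an assumption):

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// _corrupt_record must be declared in an explicit schema to be queryable
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("age", LongType),
  StructField("_corrupt_record", StringType)
))

// cache() materializes the parse, so the corrupt column alone may be queried
val parsed = spark.read.schema(schema).json("data/people.json").cache()

// With a pretty-printed file, every line lands in _corrupt_record
parsed.filter(col("_corrupt_record").isNotNull).show(truncate = false)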

     

The cause: my JSON records were pretty-printed across multiple lines. Spark's default JSON reader expects one complete JSON object per line, so the fix is simply to put each record on a single line.

    { "name": "Michael", "age": 12 } { "name": "Andy", "age": 13 } { "name": "Justin", "age": 8 }

     

Changed to:

    {"name": "Michael", "age": 12} {"name": "Andy", "age": 13} {"name": "Justin", "age": 8}

     

Whew... today was exhausting. Checking in for the day...