This article walks through a Spark SQL code example. The walkthrough is detailed and easy to follow; if you are interested in Spark SQL, read on, and hopefully you will find it helpful.
Following the Spark SQL example on the official site, I wrote a script of my own:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^"))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
Running it failed with the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 50.0 failed 1 times, most recent failure: Lost task 1.0 in stage 50.0 (TID 73, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(:30)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(:30)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1319)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
The log shows that an array index went out of bounds.
Running the command
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).foreach(x => println(x.size))
reveals one record whose split produces only 5 fields:
6
6
6
6
6
6
6
6
6
6
15/05/21 20:47:37 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1774 bytes result sent to driver
6
6
6
6
6
6
5
6
15/05/21 20:47:37 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1774 bytes result sent to driver
The cause is that this record has empty values in its trailing fields: "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
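This is standard behavior of Java's one-argument String.split, which discards trailing empty strings; a negative limit preserves them. A quick plain-Scala check (outside Spark) on the problematic record:

```scala
object SplitDemo {
  def main(args: Array[String]): Unit = {
    val line = "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
    // One-argument split drops trailing empty strings,
    // so the two empty fields at the end are lost
    println(line.split("\\^").length)      // prints 5
    // A negative limit keeps trailing empty fields
    println(line.split("\\^", -1).length)  // prints 7
  }
}
```

With the default split, index 5 does not exist, which matches the ArrayIndexOutOfBoundsException: 5 in the stack trace above.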
A fix found online is to use the two-argument split(str, limit) with a negative limit, which preserves trailing empty fields. The modified code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
// Note the -1 limit, which keeps trailing empty fields after the split.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
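As an alternative sketch (not from the original article), one could also drop malformed rows instead of relying solely on the split limit, so that a record with too few fields is filtered out rather than crashing the job. The parsing logic is shown here on a plain Scala collection with hypothetical sample lines; in Spark the same map/filter/map chain would apply to the RDD:

```scala
object DefensiveParse {
  case class UserLog(userid: String, time1: String, platform: String,
                     ip: String, openplatform: String, appid: String)

  // Split with a negative limit to keep empty trailing fields,
  // then discard any row that still has fewer than 6 fields.
  def parse(lines: Seq[String]): Seq[UserLog] =
    lines
      .map(_.split("\\^", -1))
      .filter(_.length >= 6)
      .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))

  def main(args: Array[String]): Unit = {
    // Hypothetical sample lines mimicking the log format
    val lines = Seq(
      "1001^2015-05-17 10:00:00^1^10.0.0.1^0^app1",
      "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^",
      "broken^line" // too few fields: filtered out
    )
    parse(lines).foreach(println)
  }
}
```

Filtering silently drops bad records, so it trades completeness for robustness; the split(str, -1) approach in the article keeps every record instead.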
That wraps up this analysis of the Spark SQL code example. Hopefully the content above has been useful; thanks for reading.