Hive data processing and HDFS file operations

A note before we begin:

I originally planned to have Hive call a Python script to run some statistics over the MovieLens data, but the final step, invoking the script, failed and I could not find the cause. So I am writing up the experience gained along the way; it is fairly detailed and should be useful for beginners.

I have also pasted the script-invoking query and the error message below. The problem is most likely in the script itself, and I will update this post once I find the issue or someone points it out to me.


As before, Hive and the MovieLens data serve as the running example.

1. First, open Hive and create the base table.
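The DDL itself only appears in a screenshot in the original post. As a rough sketch, assuming the base table JJW has the four tab-separated columns that show up later in the error log (userid, movieid, ratingid, unixtime), it would look something like:

CREATE TABLE JJW (
  userid INT,
  movieid INT,
  ratingid INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
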
2. In your Linux working directory, download the dataset and unzip it. My directory is /opt/jskp/jinjiwei.

wget http://files.grouplens.org/datasets/movielens/ml-100k.zip

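The unzip step appears only in a screenshot; after the wget above, unpacking is simply:

unzip ml-100k.zip
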
3. Create your own working directory on HDFS with hdfs dfs -mkdir <directory name>; mine is JJW.
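Spelled out, and assuming the directory sits directly under the HDFS root as the -put command in the next step suggests:

hdfs dfs -mkdir /JJW
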
4. Upload the locally unzipped files to HDFS:

hdfs dfs -put /opt/jskp/jinjiwei/ml-100k /JJW    (/JJW is the target directory on HDFS)

Then check the upload result on HDFS.
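The listing itself is only shown in a screenshot; assuming the paths used above, the check is simply:

hdfs dfs -ls /JJW/ml-100k
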
5. Load the u.data file under ml-100k into the base table JJW created in step 1.
My first attempt failed because I gave a local file path; the second attempt, using the HDFS path, succeeded.
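The statements are only visible in the screenshots; with the paths used above they would look roughly like this (without the LOCAL keyword, LOAD DATA expects an HDFS path, which is why the local path failed):

LOAD DATA INPATH '/JJW/ml-100k/u.data' OVERWRITE INTO TABLE JJW;
SELECT * FROM JJW LIMIT 5;
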
You can also run some simple statistics directly in Hive, for example:
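The query in the screenshot is not reproduced here; one sketch of this kind of simple statistic, counting how many ratings each score received:

SELECT ratingid, COUNT(*) AS cnt
FROM JJW
GROUP BY ratingid;
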
6. Create a child table JJW_new into which the base table JJW will be copied (since calling the Python script did not work for me, the data is loaded in directly in step 9):

CREATE TABLE JJW_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

7. Write the Python script; all it does is convert the Unix timestamp into a weekday number:

import sys
import datetime

# Hive streams the selected columns to stdin as tab-separated lines:
# userid, movieid, rating, unixtime
for line in sys.stdin:
  line = line.strip()
  userid, movieid, rating, unixtime = line.split('\t')
  # ISO weekday: 1 = Monday ... 7 = Sunday
  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
  # print() works under both Python 2 and Python 3; a Python 2-only print
  # statement fails if the cluster runs the script with Python 3
  print('\t'.join([userid, movieid, rating, str(weekday)]))

8. Add the local Python script in Hive, using the script's absolute local path.
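The exact command is only in the screenshot; assuming the script is saved under /opt/jskp/jinjiwei (the post refers to it both as weekday_mapper.py and transform.py), it would be something like:

ADD FILE /opt/jskp/jinjiwei/weekday_mapper.py;
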
9. The fallback that works without calling the Python script (the weekday column simply receives the raw unixtime values):

INSERT OVERWRITE TABLE JJW_new
SELECT
  userid, movieid, ratingid, unixtime
FROM JJW;

Then verify by querying JJW_new.
10. The method that invokes the script:

INSERT OVERWRITE TABLE JJW_new
SELECT
  TRANSFORM (userid, movieid, ratingid, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM JJW;

The error:

hive> INSERT OVERWRITE TABLE JJW_new
    > SELECT
    >   TRANSFORM (userid, movieid, ratingid, unixtime)
    >   USING 'python tansform.py'
    >   AS (userid, movieid, rating, weekday) 
    > FROM JJW;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1526968712310_2578, Tracking URL = http://hm:8088/proxy/application_1526968712310_2578/
Kill Command = /opt/software/hadoop/hadoop-2.6.4/bin/hadoop job  -kill job_1526968712310_2578
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2018-06-28 13:00:12,907 Stage-1 map = 0%,  reduce = 0%
2018-06-28 13:00:42,417 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_1526968712310_2578 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1526968712310_2578_m_000000 (and more) from job job_1526968712310_2578

Task with the most failures(4): 
-----
Task ID:
  task_1526968712310_2578_m_000000

URL:
  http://hm:8088/taskdetails.jsp?jobid=job_1526968712310_2578&tipid=task_1526968712310_2578_m_000000
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"userid":47,"movieid":324,"ratingid":3,"unixtime":"879439078"}
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:195)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"userid":47,"movieid":324,"ratingid":3,"unixtime":"879439078"}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
    ... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20001]: An error occurred while reading or writing to your custom script. It may have crashed with an error.
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:410)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
    ... 9 more
Caused by: java.io.IOException: Stream closed
    at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:433)
    at java.io.OutputStream.write(OutputStream.java:116)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.hadoop.hive.ql.exec.TextRecordWriter.write(TextRecordWriter.java:53)
    at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:378)
    ... 15 more


FAILED: Execution Error, return code 20001 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. An error occurred while reading or writing to your custom script. It may have crashed with an error.
MapReduce Jobs Launched: 
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
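
Error 20001 with "Stream closed" generally means the script process died before reading all of its input. A quick way to narrow this down, assuming the data is at /opt/jskp/jinjiwei/ml-100k/u.data and the script is weekday_mapper.py in the current directory, is to run the script locally on a few rows; if this fails, the problem is in the script itself or in the Python version that the python executable resolves to on the cluster nodes (a Python 2-only print statement, for instance, is a syntax error under Python 3):

head -5 /opt/jskp/jinjiwei/ml-100k/u.data | python weekday_mapper.py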

11. A quick summary of the commands used:

Load local data into a Hive table (OVERWRITE replaces the existing table contents):

LOAD DATA LOCAL INPATH '/opt/jskp/jinjiwei/ml-100k/u.data'
OVERWRITE INTO TABLE jjw;

Load data into a Hive table without overwriting (INTO appends; note that without the LOCAL keyword the path is interpreted as an HDFS path):

LOAD DATA INPATH '/opt/jskp/jinjiwei/ml-100k/u.data' INTO table testkv;

Upload local data to HDFS:

hdfs dfs -put /opt/jskp/jinjiwei/ml-100k.zip /JJW

Edit a file that lives on HDFS:

Fetch it:
hdfs dfs -get JJW/transform.py
Edit it locally:
vi transform.py
Push it back (the -f flag overwrites the existing file):
hdfs dfs -put -f test.txt yourHdfsPath/test.txt

Basically, hadoop dfs and hdfs dfs are interchangeable, and the options that follow largely mirror Linux commands, e.g. hdfs dfs -ls, hdfs dfs -mkdir, and so on.
Hive can also interact with Linux directly: inside the Hive CLI, prefix a Linux command with !, e.g. !find.
Some other commands are covered here (not written by me): Common HDFS file operation commands.
