Sqoop import error: Hive does not support the SQL type for column col_name

Problem:
When importing from MySQL into Hive, the import fails because Hive has no supported mapping for one of the MySQL column types, as follows:


[hdfs@hadoop0 ~]$ sqoop import --connect jdbc:mysql://10.1.32.34:3306/dicts --username sqoop --password sqoop  -m 1 --table ua --hive-import --hive-overwrite --hive-table ua
Warning: /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/06/08 11:05:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
......
16/06/08 11:05:47 INFO mapreduce.ImportJobBase: Transferred 1.0704 MB in 18.1975 seconds (60.2308 KB/sec)
16/06/08 11:05:47 INFO mapreduce.ImportJobBase: Retrieved 1245 records.
16/06/08 11:05:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `ua` AS t LIMIT 1
16/06/08 11:05:47 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive does not support the SQL type for column v
        at org.apache.sqoop.hive.TableDefWriter.getCreateTableStmt(TableDefWriter.java:181)
        at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:189)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:514)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
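The data transfer itself succeeded (1245 records reached HDFS); the failure occurs afterwards, when Sqoop builds the Hive CREATE TABLE statement. The table-definition step can also be exercised on its own with Sqoop's create-hive-table tool, which should fail with the same error without re-copying any data (a sketch, reusing the same connection details):

[hdfs@hadoop0 ~]$ sqoop create-hive-table --connect jdbc:mysql://10.1.32.34:3306/dicts --username sqoop --password sqoop --table ua --hive-table ua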




Solution:
-- MySQL table schema
mysql> desc ua;
+---------+-------------+------+-----+-------------------+----------------+
| Field   | Type        | Null | Key | Default           | Extra          |
+---------+-------------+------+-----+-------------------+----------------+
| id      | int(11)     | NO   | PRI | NULL              | auto_increment | 
| k       | varchar(32) | YES  | UNI | NULL              |                | 
| v       | mediumblob  | YES  |     | NULL              |                | 
| updated | timestamp   | NO   | MUL | CURRENT_TIMESTAMP |                | 
+---------+-------------+------+-----+-------------------+----------------+
4 rows in set (0.00 sec)


mysql> select * from ua limit 1\G
*************************** 1. row ***************************
     id: 1
      k: 0000000000007668
      v: {"old_model": "dopod p660_cmcc", "photo_fmt": "JPEG", "midi": "64", "audio_fmt": "MP3\u3001WMA\u3001WAV", "video_fmt": "WMV\u3001AVI\u30013GP(H.263)", "price": "1500-2499", "screen": "240*320", "model": "dopod P660", "wap": "wap 2.0", "os": "Windows", "brand": "\u591a\u666e\u8fbe", "midp": "MIDP 2.0", "manufacturer": "\u6b66\u6c49\u591a\u666e\u8fbe\u901a\u8baf\u6709\u9650\u516c\u53f8"}
updated: 2009-12-25 10:42:02
1 row in set (0.00 sec)




Map the offending MySQL column to Hive's string type. The v column is a MEDIUMBLOB, for which Sqoop has no default Hive type mapping, which is why generating the CREATE TABLE statement fails on that column; since the blob here actually holds UTF-8 JSON text, mapping it to string is safe.
Use the parameter --map-column-hive v=string to override the type mapping:
[hdfs@hadoop0 ~]$ sqoop import --connect jdbc:mysql://10.1.32.34:3306/dicts --username sqoop --password sqoop  -m 1 --table ua --hive-import --hive-overwrite --hive-table ua --map-column-hive v=string
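If several columns need overrides, --map-column-hive accepts a comma-separated list of column=type pairs. A sketch (the k mapping below is purely illustrative, since k already maps to string by default):

[hdfs@hadoop0 ~]$ sqoop import --connect jdbc:mysql://10.1.32.34:3306/dicts --username sqoop --password sqoop -m 1 --table ua --hive-import --hive-overwrite --hive-table ua --map-column-hive k=string,v=string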


After the import completes, inspect the contents in Hive.
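A minimal check (a sketch, assuming the hive CLI is available on this host; output omitted):

[hdfs@hadoop0 ~]$ hive -e "DESCRIBE ua;"
[hdfs@hadoop0 ~]$ hive -e "SELECT * FROM ua LIMIT 1;"

DESCRIBE should now report column v as string.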




Reference:
http://sdhsdhsdhsdh.iteye.com/blog/1944095