Using Sqoop to Periodically Export Data from a Hive Table into MySQL

This post shows how to export data from a Hive table into MySQL on a schedule using Sqoop.

Without further ado, here is the script we run in production:

#!/bin/sh
#********************************************************************************
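# Export yesterday's partition of the Hive table t_dwa_hot_result_ad into the
# MySQL table t_st_video_hot_ad: dump the Hive query result to a local file,
# push it to HDFS, then load it into MySQL with sqoop export.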
#********************************************************************************
#reload env

# MySQL connection settings
mysqldhome_hostname="172.28.65.133"
mysqldhome_username="root"
mysqldhome_password="xxxxx"
mysqldhome_port="3306"
mysqldhome_dbname="recom_video"

# Hive table name
hive_table1=t_dwa_hot_result_ad
# MySQL table name
mysql_table=t_st_video_hot_ad
echo "......mysql_table = "$mysql_table".......hive_table1 = "$hive_table1

# Get the current system time
sys_time=$(date "+%Y%m%d%H%M")
echo "...sys_time = "$sys_time

# Target date to export (yesterday; the commented-out lines were used for fixed/input dates)
#op_day=${last_day}
#echo "op_day="$op_day
#dt_month=`date | awk '{print $2}'`  # month of the current date
#op_day="20200611"
op_day=`date -d '-1 days' +%Y%m%d`

# Clear the old rows from the MySQL table
mysql -h$mysqldhome_hostname -P$mysqldhome_port -u$mysqldhome_username -p$mysqldhome_password -D$mysqldhome_dbname -e "
DELETE FROM $mysql_table;
"
echo "delete mysql data success...."

touch ${mysql_table}_temp_$op_day
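# Dump the Hive query result for the target day into the local staging file just created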
/usr/bin/hive -e "
use recommend_video;
set hive.execution.engine=mr;
set hive.fetch.task.conversion=none;
select
  cast(video_id as BIGINT) as video_id,
  cast(watch_user as int) as watch_user,
  cast(max_watch_user as int) as max_watch_user,
  cast(coalesce(sum_finish_score ,'0.0') as decimal(15,4)) as sum_finish_score,
  cast(cnt_finish_score as int) as cnt_finish_score,
  cast(coalesce(favor_cnt,'0') as int) as favor_cnt,
  cast(coalesce(max_favor_cnt,'0') as int) as max_favor_cnt,
  cast(coalesce(hot_score,'0.0') as decimal(15,4)) as hot_score,
  create_time as shelf_time,
  from_unixtime(unix_timestamp()) as create_time,
  from_unixtime(unix_timestamp()) as update_time
from ${hive_table1} 
where dayid = "$op_day";" | grep -vi '^warn\|data' >  ${mysql_table}_temp_$op_day

echo "...select hive data success.... save in file:"${mysql_table}_temp_$op_day
echo "...dayid = "$op_day" "

/usr/bin/hadoop fs -put ${mysql_table}_temp_$op_day /tmp/
rm -f ${mysql_table}_temp_$op_day
echo "put file to hdfs, and del file in local path..."

hdfs_temp='/tmp/'$mysql_table'_temp_'$op_day
encoding="?useUnicode=true&characterEncoding=utf-8"

echo "...sqoop script start..."
/usr/bin/sqoop export --connect "jdbc:mysql://$mysqldhome_hostname:$mysqldhome_port/$mysqldhome_dbname$encoding" \
--username $mysqldhome_username \
--password $mysqldhome_password \
--table $mysql_table \
--fields-terminated-by '\t' \
--input-fields-terminated-by '\t' \
--export-dir $hdfs_temp \
--columns video_id,watch_user,max_watch_user,sum_finish_score,cnt_finish_score,favor_cnt,max_favor_cnt,hot_score,shelf_time,create_time,update_time
echo "...sqoop script end..."

/usr/bin/hadoop fs -rm -r $hdfs_temp
echo "rm hdfs tmp file..."
echo "......task end......"

exit 0


Notes:

1. Use the full path for every command the script runs!

For example: /usr/bin/hive -e, /usr/bin/hadoop fs, /usr/bin/sqoop, and so on.
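If you are not sure what the full paths are on a given machine, the shell can tell you. A quick check (assuming the binaries are already on the interactive shell's PATH):

# Print the full path of each command the script relies on
command -v hive hadoop sqoop mysql

The paths printed here are what should go into the script, because cron jobs run with a much smaller PATH than an interactive login shell.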

PS: this script first dumps the Hive query result to a local file, uploads that file to HDFS, and only then uses Sqoop to insert it into MySQL. Based on methods described online, it should be possible to read the Hive table and load it into MySQL directly; interested readers can give that a try.
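One way to do that, sketched here but not tested as part of this post, is Sqoop's HCatalog integration (Sqoop 1.4.4+), which reads the Hive table directly instead of going through a staging file. The flags below are standard Sqoop options; the connection and table values are the variables from the script above, and the casts plus the "where dayid = ..." filter done by the Hive query would still have to be handled separately (for example via a pre-built staging Hive table):

# Sketch only: export a Hive table straight to MySQL via HCatalog,
# reusing the connection variables defined in the script above
/usr/bin/sqoop export \
--connect "jdbc:mysql://$mysqldhome_hostname:$mysqldhome_port/$mysqldhome_dbname$encoding" \
--username $mysqldhome_username \
--password $mysqldhome_password \
--table $mysql_table \
--hcatalog-database recommend_video \
--hcatalog-table $hive_table1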

For reference, here is the MySQL table definition:

CREATE TABLE `t_st_video_hot_ad` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Auto-increment ID, primary key',
  `video_id` bigint(20) NOT NULL COMMENT 'Video ID',
  `watch_user` bigint(15) NOT NULL COMMENT 'Current view count of the video',
  `max_watch_user` bigint(15) NOT NULL COMMENT 'Maximum video view count',
  `sum_finish_score` decimal(10,4) NOT NULL COMMENT 'Completion-rate numerator (sum)',
  `cnt_finish_score` bigint(15) NOT NULL COMMENT 'Completion-rate denominator (count)',
  `favor_cnt` bigint(15) NOT NULL COMMENT 'Current like count of the video',
  `max_favor_cnt` bigint(15) NOT NULL COMMENT 'Maximum video like count',
  `hot_score` decimal(8,4) DEFAULT NULL COMMENT 'Hotness score',
  `shelf_time` varchar(50) NOT NULL COMMENT 'Shelf (publish) time',
  `create_time` datetime NOT NULL COMMENT 'Creation time',
  `update_time` datetime DEFAULT NULL COMMENT 'Update time',
  PRIMARY KEY (`id`),
  KEY `idx_video_id` (`video_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Add a crontab entry to run the script daily:

00 14 * * * sh /data/xh/test/t_st_video_hot_ad.sh >> /data/xh/test/video_hot_log.log 2>&1
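After a run (whether started by hand or from cron), a quick row count against MySQL is an easy sanity check. This is just a sketch using the connection values from the script above:

# Quick sanity check: count the rows that landed in the target table
mysql -h172.28.65.133 -P3306 -uroot -pxxxxx -Drecom_video \
-e "SELECT COUNT(*) FROM t_st_video_hot_ad;"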


Here are the errors I ran into while debugging:

1. JAVA_HOME environment variable could not be found.

Fix: declare the variable again in /etc/hadoop/conf/hadoop-env.sh.

For example: export JAVA_HOME=/data/jdk1.8.0_171


2. "Export job failed"... this one had me stuck for quite a while.

In fact, the cause is easy to pin down by reading the Hadoop YARN logs for the failed job.
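The failed Sqoop export prints a YARN application ID (application_...), and the aggregated container logs for that ID usually contain the real exception. A typical check looks like this; the application ID below is only a placeholder:

# Fetch the aggregated YARN logs for the failed MapReduce job
# (replace the placeholder ID with the one printed in the Sqoop output)
yarn logs -applicationId application_1591234567890_0001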


Inspecting the intermediate export data showed that an extra line of plain text (not a data row) had been mixed into the file.

Fix: use full paths when running hive, hdfs, and the other commands, as mentioned above!
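A quick way to confirm the staging data is clean before Sqoop runs is to look at the first few lines of the file uploaded to HDFS. A minimal check, using the same temp-file naming as the script ($mysql_table and $op_day as defined there):

# Every line should be a tab-separated data row, with no WARN/log text mixed in
/usr/bin/hadoop fs -cat /tmp/${mysql_table}_temp_$op_day | head -n 5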


Done!!!
