[Big Data Hands-On Primer] Hive DML: Creating Tables, Importing and Exporting Data

1: Importing data into Hive
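The general form of the load statement is shown below; with LOCAL the path is read from the local file system, and without it the path is taken from HDFS (in which case the source file is moved into the table's directory rather than copied).

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename;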

hive> show databases;
OK
default
zhdc
Time taken: 0.833 seconds, Fetched: 2 row(s)
hive> use zhdc;
OK
Time taken: 0.049 seconds
hive> show tables;
OK
emp
Time taken: 0.071 seconds, Fetched: 1 row(s)
hive> LOAD DATA LOCAL INPATH '/home/hadoop/data/emp.txt' OVERWRITE INTO TABLE emp ;
Loading data to table zhdc.emp
Table zhdc.emp stats: [numFiles=1, numRows=0, totalSize=700, rawDataSize=0]
OK
Time taken: 1.852 seconds
hive> select * from emp;
OK
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
8888	HIVE	PROGRAM	7839	1988-1-23	10300.0	NULL	NULL
Time taken: 0.456 seconds, Fetched: 15 row(s)
hive> 
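The emp table used above was created ahead of time; its DDL is not shown in this post, but judging from the tab-separated emp.txt and the query output, it would have looked roughly like the following (a sketch, with column names assumed from the classic Oracle demo schema):

create table emp(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int
) row format delimited fields terminated by '\t';
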
1: Create a new table
create table dept(
deptno int,
dname string,
location string
) row format delimited fields terminated by '\t';
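
dept.txt must be tab-separated to match this DDL. Assuming it holds the classic Oracle demo departments that usually accompany the emp data above, its contents would look something like:

10	ACCOUNTING	NEW YORK
20	RESEARCH	DALLAS
30	SALES	CHICAGO
40	OPERATIONS	BOSTON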

2: Overwrite import
LOAD DATA LOCAL INPATH '/home/hadoop/data/dept.txt' OVERWRITE INTO TABLE dept;

3: Append import
LOAD DATA LOCAL INPATH '/home/hadoop/data/dept.txt' INTO TABLE dept;
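
OVERWRITE wipes the table's existing files before loading, while a plain INTO TABLE adds a new file alongside them. Running the two statements above back to back makes the difference easy to verify (a sketch, assuming dept.txt holds 4 rows):

select count(*) from dept;   -- 4 after the overwrite import in step 2
-- step 3 then appends the same file again
select count(*) from dept;   -- 8: the same 4 rows now appear twice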

2: Exporting data from Hive
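The general form is shown below; with LOCAL the result is written to the local file system, without it to HDFS.

INSERT OVERWRITE [LOCAL] DIRECTORY 'directory' [ROW FORMAT row_format] SELECT ... FROM ...;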
[hadoop@localhost ~]$ mkdir -p /home/hadoop/tmp/d6/emptmp

INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/tmp/d6/emptmp'
row format delimited fields terminated by ','
SELECT empno,ename FROM emp;

hive> INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/tmp/d6/emptmp'
    > row format delimited fields terminated by ','
    > SELECT empno,ename FROM emp;
Query ID = hadoop_20190317085858_a09b8b20-f30c-4a18-9b57-21d500fe330d
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1552784260685_0001, Tracking URL = http://localhost:8088/proxy/application_1552784260685_0001/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1552784260685_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-03-17 09:46:55,963 Stage-1 map = 0%,  reduce = 0%
2019-03-17 09:47:01,517 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.91 sec
MapReduce Total cumulative CPU time: 910 msec
Ended Job = job_1552784260685_0001
Copying data to local directory /home/hadoop/tmp/d6/emptmp
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 0.91 sec   HDFS Read: 3613 HDFS Write: 164 SUCCESS
Total MapReduce CPU Time Spent: 910 msec
OK
Time taken: 18.479 seconds
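
The export writes one file per mapper into the target directory; with the single mapper above, the output is typically a file named 000000_0 (the exact file name is an assumption, as it is not shown in the original run). Given the comma delimiter and the emp data loaded earlier, a quick check from the shell should show lines like:

[hadoop@localhost ~]$ cat /home/hadoop/tmp/d6/emptmp/000000_0
7369,SMITH
7499,ALLEN
...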

 
