Hive DDL Notes

To make it easy to inspect table structures and data, we access Hive in client/server mode.

 

Start the Hive server (it listens on port 10000):

[root@master apps]# hive-1.2.1/bin/hiveserver2

After startup the cursor just keeps blinking; this means the server is running (hiveserver2 stays in the foreground).

 

Connect a client to the Hive server and run operations

Use Beeline as the Hive client (on another machine).
Note: Enter username for jdbc:hive2://master.hadoop:10000: (enter the user that started the server)

        Enter password for jdbc:hive2://master.hadoop:10000: (no password was configured, so just press Enter to skip)

 

The steps in detail:

[root@slave1 apps]# cd hive-1.2.1/
[root@slave1 hive-1.2.1]# bin/beeline 
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://master.hadoop:10000
Connecting to jdbc:hive2://master.hadoop:10000
Enter username for jdbc:hive2://master.hadoop:10000: root
Enter password for jdbc:hive2://master.hadoop:10000: 
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://master.hadoop:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| mydb           |
+----------------+--+
2 rows selected (3.776 seconds)
0: jdbc:hive2://master.hadoop:10000> use mydb;
No rows affected (0.352 seconds)
0: jdbc:hive2://master.hadoop:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| t_1       |
| t_1_like  |
| t_1_son   |
| t_2       |
+-----------+--+
4 rows selected (0.236 seconds)
0: jdbc:hive2://master.hadoop:10000> 

Hive DDL operations

1. Create an internal (managed) table; its directory is created automatically by Hive

create table t_1(id int, name string,password string)
row format delimited
fields terminated by ',';

The steps in detail:

0: jdbc:hive2://master.hadoop:10000> create table t_1(id int, name string,password string)
0: jdbc:hive2://master.hadoop:10000> row format delimited
0: jdbc:hive2://master.hadoop:10000> fields terminated by ',';
No rows affected (1.671 seconds)
0: jdbc:hive2://master.hadoop:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| t_1       |
+-----------+--+
1 row selected (0.104 seconds)
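To confirm what Hive actually did with the table, `desc formatted` prints the table type (MANAGED_TABLE vs. EXTERNAL_TABLE) and its HDFS location:

```sql
-- For the managed table above this shows Table Type: MANAGED_TABLE and a
-- Location under the default warehouse path, e.g. /user/hive/warehouse/mydb.db/t_1
desc formatted t_1;
```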

2. Create an external table; you can specify the directory for its files

create external table t_2(id int, name string,password string)
row format delimited
fields terminated by ','
location '/aa/bb';

The steps in detail:

0: jdbc:hive2://master.hadoop:10000> create external table t_2(id int, name string,password string)
0: jdbc:hive2://master.hadoop:10000> row format delimited
0: jdbc:hive2://master.hadoop:10000> fields terminated by ','
0: jdbc:hive2://master.hadoop:10000> location '/aa/bb';
No rows affected (4.267 seconds)
0: jdbc:hive2://master.hadoop:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| t_1       |
| t_2       |
+-----------+--+
2 rows selected (0.363 seconds)
0: jdbc:hive2://master.hadoop:10000> 

Differences between internal and external (external) tables:
1)
An internal table's directory is created by Hive under the default warehouse path: /user/hive/warehouse/......
An external table's directory is specified by the user at table-creation time: location '/path'
2)
When you drop an internal table, both the table's metadata and its data directory are deleted.
When you drop an external table, only the metadata is deleted; the data directory is left untouched.

Why external tables matter: in a data warehouse system the data is usually produced by other systems. An external table lets Hive map that data conveniently, and even if the Hive table is dropped, the directory is still there, so systems that keep using that directory are not affected.
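A sketch of the drop-and-remap pattern this enables, reusing the t_2 definition from above:

```sql
-- Only the metadata disappears; the files under /aa/bb stay in HDFS
drop table t_2;

-- Re-creating the external table over the same location brings the
-- old data back into view immediately
create external table t_2(id int, name string, password string)
row format delimited
fields terminated by ','
location '/aa/bb';
```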
 

3. Loading data

1) Load a local file into Hive (the file must be on the machine running the Hive server, here master.hadoop):

load data local inpath '/root/a.txt' into table t_1;

The steps in detail:

0: jdbc:hive2://master.hadoop:10000> load data local inpath '/root/a.txt' into table t_1;
INFO  : Loading data to table mydb.t_1 from file:/root/a.txt
INFO  : Table mydb.t_1 stats: [numFiles=1, totalSize=45]
No rows affected (0.911 seconds)
0: jdbc:hive2://master.hadoop:10000> select * from t_1;
+---------+-----------+---------------+--+
| t_1.id  | t_1.name  | t_1.password  |
+---------+-----------+---------------+--+
| 1       | user1     | 123123        |
| 2       | user2     | 123123        |
| 3       | user3     | 123123        |
+---------+-----------+---------------+--+
3 rows selected (0.289 seconds)
0: jdbc:hive2://master.hadoop:10000> 

2) Load an HDFS file into Hive:

load data inpath '/a.txt' into table t_1; (this moves the file from its HDFS location into the table directory, rather than copying it)

The steps in detail:

Check the files currently in HDFS:

[root@master ~]# hadoop fs -ls /
Found 4 items
-rw-r--r--   2 root supergroup         45 2018-07-08 02:47 /a.txt
drwxr-xr-x   - root supergroup          0 2018-07-07 16:40 /root
drwx-wx-wx   - root supergroup          0 2018-07-07 16:05 /tmp
drwxr-xr-x   - root supergroup          0 2018-07-07 16:36 /user
[root@master ~]# 

Load a.txt into Hive:

0: jdbc:hive2://master.hadoop:10000> load data inpath '/a.txt' into table t_1;
INFO  : Loading data to table mydb.t_1 from hdfs://ns/a.txt
INFO  : Table mydb.t_1 stats: [numFiles=2, totalSize=90]
No rows affected (0.986 seconds)
0: jdbc:hive2://master.hadoop:10000> select * from t_1;
+---------+-----------+---------------+--+
| t_1.id  | t_1.name  | t_1.password  |
+---------+-----------+---------------+--+
| 1       | user1     | 123123        |
| 2       | user2     | 123123        |
| 3       | user3     | 123123        |
| 1       | user1     | 123123        |
| 2       | user2     | 123123        |
| 3       | user3     | 123123        |
+---------+-----------+---------------+--+
6 rows selected (0.435 seconds)
0: jdbc:hive2://master.hadoop:10000> 

Check the HDFS directory again (a.txt is gone, because it was moved into the table directory):

[root@master ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-07-07 16:40 /root
drwx-wx-wx   - root supergroup          0 2018-07-07 16:05 /tmp
drwxr-xr-x   - root supergroup          0 2018-07-07 16:36 /user
[root@master ~]# 

3) Query data from one table and insert it into another (the target table is created automatically):

create table t_1_son
as
select * from t_1;

4) Create a new table based on the structure of an existing table:

create table t_1_like like t_1;
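Unlike the `create table ... as select` form in 3), `like` copies only the table definition (columns, delimiter, and so on), so the new table starts out empty:

```sql
-- t_1_like has the same columns and delimiter as t_1, but no data yet
select count(*) from t_1_like;   -- returns 0 until data is inserted
```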

Insert data into it:

insert into table t_1_like
select
id,name,password
from t_1;
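`insert into` appends to the table's existing contents. Hive also supports `insert overwrite`, which replaces them instead (sketch):

```sql
insert overwrite table t_1_like
select id, name, password
from t_1;   -- previous rows in t_1_like are replaced rather than appended to
```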

 

4. Exporting data:

 

Export to HDFS (note: any existing contents of the target directory are overwritten):

insert overwrite directory '/root/aa/bb'
select * from t_1;

Export to the local filesystem:

insert overwrite local directory '/root/aa/bb'
select * from t_1;
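One caveat: with a plain `insert overwrite directory`, Hive 1.x writes the files using its default field separator (the \001 / Ctrl-A control character), not commas. Since Hive 0.11 the export can be given its own row format (sketch):

```sql
insert overwrite local directory '/root/aa/bb'
row format delimited
fields terminated by ','
select * from t_1;   -- the exported files are now comma-separated
```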

 

 

 
