Basic Hive SQL Operations
1. Hive DDL (Data Definition Language)
1.1 Basic database operations
-- show all databases
show databases;
-- switch to a database
use database_name;
/* Create a database
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
[COMMENT database_comment]
[LOCATION hdfs_path]
[WITH DBPROPERTIES (property_name=property_value, ...)];
*/
create database test;
/*
Drop a database
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
*/
drop database database_name;
Note: when you enter the Hive CLI and start writing SQL without issuing any database-related statement, all tables are created in the `default` database; on HDFS their data lives directly under Hive's default warehouse path. If you create a database, a folder named `database_name.db` is created under the warehouse path, and all tables of that database are stored inside that `database_name.db` directory.
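As a sketch of the resulting HDFS layout (assuming the default warehouse path `/user/hive/warehouse`, which is configurable via `hive.metastore.warehouse.dir`; `some_table` is a hypothetical table name):

```sql
-- With no database selected, a new table lands in `default`:
--   /user/hive/warehouse/some_table
create database test;
-- Creating a database adds a .db folder under the warehouse path:
--   /user/hive/warehouse/test.db
use test;
create table some_table (id int);
-- Tables of this database now live inside that folder:
--   /user/hive/warehouse/test.db/some_table
```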
1.2 Basic table operations
/*
Create a table
Basic syntax:
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name -- (Note: TEMPORARY available in Hive 0.14.0 and later)
[(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...) -- (Note: Available in Hive 0.10.0 and later)
   ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
   [STORED AS DIRECTORIES]]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] -- (Note: Available in Hive 0.6.0 and later)
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)] -- (Note: Available in Hive 0.6.0 and later)
[AS select_statement]; -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];
Data types
data_type
: primitive_type
| array_type
| map_type
| struct_type
| union_type -- (Note: Available in Hive 0.7.0 and later)
Primitive data types
primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
| STRING
| BINARY -- (Note: Available in Hive 0.8.0 and later)
| TIMESTAMP -- (Note: Available in Hive 0.8.0 and later)
| DECIMAL -- (Note: Available in Hive 0.11.0 and later)
| DECIMAL(precision, scale) -- (Note: Available in Hive 0.13.0 and later)
| DATE -- (Note: Available in Hive 0.12.0 and later)
| VARCHAR -- (Note: Available in Hive 0.12.0 and later)
| CHAR -- (Note: Available in Hive 0.13.0 and later)
array_type
: ARRAY < data_type >
map_type
: MAP < primitive_type, data_type >
struct_type
: STRUCT < col_name : data_type [COMMENT col_comment], ...>
union_type
: UNIONTYPE < data_type, data_type, ... > -- (Note: Available in Hive 0.7.0 and later)
Row format specification
row_format
: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
[NULL DEFINED AS char] -- (Note: Available in Hive 0.13 and later)
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
File formats
file_format:
: SEQUENCEFILE
| TEXTFILE -- (Default, depending on hive.default.fileformat configuration)
| RCFILE -- (Note: Available in Hive 0.6.0 and later)
| ORC -- (Note: Available in Hive 0.11.0 and later)
| PARQUET -- (Note: Available in Hive 0.13.0 and later)
| AVRO -- (Note: Available in Hive 0.14.0 and later)
| JSONFILE -- (Note: Available in Hive 4.0.0 and later)
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
Table constraints
constraint_specification:
: [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
[, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE]
*/
1.2.1 Create a plain Hive table (no row format definition)
create table psn
(
id int,
name string,
likes array<string>,
address map<string,string>
);
1.2.2 Create a Hive table with a custom row format
create table psn2
(
id int,
name string,
likes array<string>,
address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';
1.2.3 Create a Hive table with the default delimiters (^A, ^B, ^C)
create table psn3
(
id int,
name string,
likes array<string>,
address map<string,string>
)
row format delimited
fields terminated by '\001'
collection items terminated by '\002'
map keys terminated by '\003';
1.2.4 Create a Hive external table (requires the external keyword and a location clause)
The tables created earlier (psn, psn2, psn3) are Hive internal (managed) tables, whereas psn4 is an external table.
create external table psn4
(
id int,
name string,
likes array<string>,
address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':'
location '/data';
1.2.5 Differences between internal and external tables:
1. An internal table stores its data under Hive's default warehouse directory; an external table requires an explicit directory (location) at creation time.
2. Dropping an internal table deletes both the metadata and the data; dropping an external table deletes only the metadata, never the data.
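This difference can be observed directly on the tables created above (a sketch; `psn` is managed, `psn4` is external):

```sql
-- Managed table: drops the metastore entry AND deletes the data files
-- under the warehouse directory.
drop table psn;

-- External table: drops only the metastore entry; the files under
-- the '/data' location remain untouched on HDFS.
drop table psn4;
```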
1.2.6 When to use internal vs. external tables

| Type | Use case |
|---|---|
| Internal table | Create the table first, then load data into it; suitable for storing intermediate tables. |
| External table | The table can be created before or after the data exists. It essentially maps an HDFS directory onto a Hive table, so it suits raw-data storage: an accidental drop will not delete the data. |
1.2.7 Create a single-partition table
Hive partitioned tables:
By default Hive keeps a table's data under a single HDFS directory, so retrieving any subset of rows requires a full scan of the data, which is IO-heavy and slow.
Partitioning applies divide-and-conquer: rows matching a condition are placed in their own directory, so a query only scans the matching directories instead of the whole data set.
create table psn5
(
id int,
name string,
likes array<string>,
address map<string,string>
)
partitioned by(gender string)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';
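Loading into a partitioned table requires naming the partition explicitly; a minimal sketch (the value 'man' is just an example):

```sql
load data local inpath '/root/data/data' into table psn5 partition(gender='man');
-- Each partition value becomes its own HDFS directory, e.g.:
--   .../psn5/gender=man/data
```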
1.2.8 Create a multi-partition table
create table psn6
(
id int,
name string,
likes array<string>,
address map<string,string>
)
partitioned by(gender string,age int)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';
Notes:
- Once a partitioned table exists, each partition column becomes a directory on HDFS when data is stored, nested as multi-level directories.
- When inserting into a multi-partition table, you cannot supply just one partition column; every partition column must be given a value.
- When supplying partition values for a multi-partition table, order does not matter; values are matched against the partition columns by name.
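The notes above can be sketched as follows (the values 10 and 'man' are examples):

```sql
-- Every partition column must get a value; the order written here is
-- irrelevant because Hive matches by column name:
load data local inpath '/root/data/data'
into table psn6 partition(age=10, gender='man');
-- The directories nest in the order the partition columns were declared:
--   .../psn6/gender=man/age=10/data
```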
1.2.9 Add a partition value to a partitioned table
alter table table_name add partition(col_name=col_value);
1.2.10 Drop a partition value
alter table table_name drop partition(col_name=col_value);
Notes:
- When adding a partition to a multi-partition table, every partition column must be given a value.
- When dropping a partition, on either a single- or multi-partition table, the specified partition(s) can be removed.
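Applied to the tables above (a sketch with example partition values):

```sql
-- Multi-partition table: ADD PARTITION must name every partition column
alter table psn6 add partition(gender='man', age=10);

-- DROP PARTITION also accepts a partial specification: here every
-- partition under gender='man' is dropped, whatever its age value.
alter table psn6 drop partition(gender='man');
```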
1.2.11 Repairing partitions:
With Hive external tables you can upload data to an HDFS directory first and then create the external table to establish the mapping. If, while uploading, you also laid out multi-level directories in the partition-table style, the table will return no data after creation: the partition metadata was never written to the metastore (MySQL). You therefore need to repair the partitions, syncing that metadata into the metastore, after which the data becomes queryable. Concretely:
- Create the directories on HDFS and upload the files
hdfs dfs -mkdir /test
hdfs dfs -mkdir /test/age=10
hdfs dfs -mkdir /test/age=20
hdfs dfs -put /root/data/data /test/age=10
hdfs dfs -put /root/data/data /test/age=20
- Create the external table
create external table psn7
(
id int,
name string,
likes array<string>,
address map<string,string>
)
partitioned by(age int)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':'
location '/test';
- Query result (no data)
select * from psn7;
- Repair the partitions
msck repair table psn7;
- Query result (data present)
select * from psn7;
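To confirm what the repair registered, the partition list can be inspected; alternatively, each partition could have been registered by hand instead of running MSCK:

```sql
show partitions psn7;
-- manual alternative to msck repair, one partition at a time:
-- alter table psn7 add partition(age=10);
-- alter table psn7 add partition(age=20);
```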
Problem
Creating Hive partitioned tables as above has a drawback: every insert names the partition value by hand. What we really want is for the partition directory to be chosen from a field of each record, which the static partitioning above cannot do; that requires dynamic partitioning, covered later.
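As a preview, a minimal dynamic-partition sketch (the source table `psn_src` with a trailing `gender` column is hypothetical; the exact settings are covered later):

```sql
-- Allow a fully dynamic insert (no static partition value at all):
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- The trailing select column (gender) supplies the partition value per row:
insert into table psn5 partition(gender)
select id, name, likes, address, gender from psn_src;
```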
2. Hive DML
2.1 Inserting data
2.1.1 Loading files into tables
Load a data file into a table.
Syntax:
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
[PARTITION (partcol1=val1, partcol2=val2 ...)]
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
[PARTITION (partcol1=val1, partcol2=val2 ...)] [INPUTFORMAT 'inputformat' SERDE 'serde'] (3.0 or later)
- Load local data into a Hive table
load data local inpath '/root/data/data' into table psn;
(/root/data/data is a local Linux path)
- Load an HDFS data file into a Hive table
load data inpath '/data/data' into table psn2;
(/data/data is an HDFS path)
Notes:
1. A load performs no transformation or modification of the data whatsoever.
2. Loading a data file from local Linux copies the file.
3. Loading a data file from HDFS moves the file.
4. Load also works for partitioned tables, provided the partition values are supplied.
5. Hive validates data format on read (schema-on-read); MySQL validates on write.
2.1.2 Inserting data into Hive Tables from queries
Insert data obtained from a query into a table.
Syntax:
Standard syntax:
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;
Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;
Hive extension (dynamic partition inserts):
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
- Query rows from one table into a result table, e.g.:
INSERT OVERWRITE TABLE psn9 SELECT id,name FROM psn;
- Take selected columns from one table into new tables (multi-insert), e.g.:
from psn
insert overwrite table psn9
select id,name
insert into table psn10
select id;
Note: with this approach the result tables must be created in advance.
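For the statements above to run, psn9 and psn10 must already exist; they might be created like this (the schemas are an assumption matching the selected columns):

```sql
create table psn9 (id int, name string);
create table psn10 (id int);
```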
2.1.3 Writing data into the filesystem from queries
Write query results out to a filesystem.
Syntax:
Standard syntax:
INSERT OVERWRITE [LOCAL] DIRECTORY directory1
[ROW FORMAT row_format] [STORED AS file_format] (Note: Only available starting with Hive 0.11.0)
SELECT ... FROM ...
Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
[INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2] ...
row_format
: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
[NULL DEFINED AS char] (Note: Only available starting with Hive 0.13)
- Export query results to HDFS
insert overwrite directory '/result' select * from psn;
- Export query results to the local filesystem
insert overwrite local directory '/result' select * from psn;
Note: never point the path at the root directory; OVERWRITE would clobber every data file under it.
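When exported without a row format, fields in the result files are separated by the default ^A delimiter; starting with Hive 0.11.0 a readable delimiter can be requested explicitly, e.g.:

```sql
insert overwrite directory '/result'
row format delimited fields terminated by ','
select * from psn;
```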
2.1.4 Inserting values into tables from SQL
This inserts rows the way a traditional relational database does, and is inefficient in Hive.
Syntax:
Standard Syntax:
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...]
Where values_row is:
( value [, value ...] )
where a value is either null or any valid SQL literal
- Insert data (note: Hive's INSERT ... VALUES does not support complex types such as array or map, so it only works for primitive columns)
insert into psn values(1,'zhangsan');
2.2 Updating and deleting data
The official documentation does state that Hive supports UPDATE and DELETE, but in practice both require transaction support, and Hive places many restrictions on transactions (see the Hive transactions documentation for the full list).
Therefore, deletes and updates are rarely performed in Hive. If you need to test them, use a configuration like the following:
// Add the following configuration to Hive's hive-site.xml:
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.enforce.bucketing</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.txn.manager</name>
<value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
<name>hive.compactor.initiator.on</name>
<value>true</value>
</property>
<property>
<name>hive.compactor.worker.threads</name>
<value>1</value>
</property>
// Statements
create table test_trancaction (user_id int, name string) clustered by (user_id) into 3 buckets stored as orc TBLPROPERTIES ('transactional'='true');
create table test_insert_test (id int, name string) row format delimited fields terminated by ',';
insert into test_trancaction select * from test_insert_test;
update test_trancaction set name='jerrick_up' where user_id=1;
// Data file
1,jerrick
2,tom
3,jerry
4,lily
5,hanmei
6,limlei
7,lucky
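With the same transactional setup, a DELETE follows the same pattern as the UPDATE above (a sketch; it likewise requires the ACID table and DbTxnManager configuration):

```sql
delete from test_trancaction where user_id = 3;
```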