Connecting Ruby to Hadoop through Hive

Connection method: rbhive

To use rbhive you first need to install the gem (gem install rbhive), which is available from rubygems.org. The source code lives at https://github.com/forward/rbhive; see that page for more details.

The author's overall description of rbhive: "A simple library to execute Hive queries against the Hive thrift server."

This gem provides a number of public methods for working with Hive, and through it for retrieving data stored in Hadoop. The most commonly used are the following.

fetch results

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  connection.fetch 'SELECT city, country FROM cities'
end
➔ [{:city => "London", :country => "UK"}, {:city => "Mumbai", :country => "India"}, {:city => "New York", :country => "USA"}]
Running the SQL statement returns the results as an array; each element is a hash whose keys are the column names of the table you queried.
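
As a minimal illustration (reusing the cities table from the example above), the returned rows can be handled like any other Ruby collection:

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  rows = connection.fetch 'SELECT city, country FROM cities'
  # each element is a hash keyed by the symbolized column names
  rows.each do |row|
    puts "#{row[:city]}, #{row[:country]}"
  end
end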

execute sql

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  connection.execute 'DROP TABLE cities'
end
➔ nil
No result set is returned.
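
This makes execute the method to reach for with statements that produce no rows; a minimal sketch, assuming a hypothetical /tmp/cities.txt file that matches the cities table layout:

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  # LOAD DATA returns no result set, so execute (not fetch) is appropriate
  connection.execute "LOAD DATA LOCAL INPATH '/tmp/cities.txt' INTO TABLE cities"
end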

how to create and/or drop tables

require 'rubygems'
require 'rbhive'

table = TableSchema.new('person', 'List of people that owe me money') do
  column 'name', :string, 'Full name of debtor'
  column 'address', :string, 'Address of debtor'
  column 'amount', :float, 'The amount of money borrowed'

  partition 'dated', :string, 'The date money was given'
  partition 'country', :string, 'The country the person resides in'
end

RBHive.connect('hive.server.address') do |connection|
  connection.create_table(table)
  connection.drop_table(table)
end


how to modify table schema

require 'rubygems'
require 'rbhive'

table = TableSchema.new('person', 'List of people that owe me money') do
  column 'name', :string, 'Full name of debtor'
  column 'address', :string, 'Address of debtor'
  column 'amount', :float, 'The amount of money borrowed'
  column 'new_amount', :float, 'The new amount this person somehow convinced me to give them'

  partition 'dated', :string, 'The date money was given'
  partition 'country', :string, 'The country the person resides in'
end

RBHive.connect('hive.server.address') do |connection|
  connection.replace_columns(table)
end

Basic Hive usage (reposted)

Create a table

hive> CREATE TABLE pokes (foo INT, bar STRING);

Create a table with a partition column ds

hive> CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);

List all tables

hive> SHOW TABLES;

List tables matching a regular expression

hive> SHOW TABLES '.*s';

Add a column to a table

hive> ALTER TABLE pokes ADD COLUMNS (new_col INT);

Add a column with a column comment

hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');

Rename a table

hive> ALTER TABLE events RENAME TO 3koobecaf;

Drop a table

hive> DROP TABLE pokes;
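
Each of the CLI statements above can equally be issued from Ruby through rbhive's execute method; a minimal sketch using the pokes table from these examples:

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  # rbhive passes raw HiveQL strings straight through to the Hive server
  connection.execute 'CREATE TABLE pokes (foo INT, bar STRING)'
  connection.execute 'ALTER TABLE pokes ADD COLUMNS (new_col INT)'
  connection.execute 'DROP TABLE pokes'
end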

DML operations

Load data from a file into a table

hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;

Load local data, supplying partition information

hive> LOAD DATA LOCAL INPATH './examples/files/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');

Load data from HDFS, supplying partition information

hive> LOAD DATA INPATH '/user/myname/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');
The above command will load data from an HDFS file/directory to the table. Note that loading data from HDFS will result in moving the file/directory. As a result, the operation is almost instantaneous.

SQL operations: query by condition

hive> SELECT a.foo FROM invites a WHERE a.ds='<DATE>';

Write query output to an HDFS directory

hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a WHERE a.ds='<DATE>';

Write query results to a local directory

hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/local_out' SELECT a.* FROM pokes a;

More examples: selecting columns into tables and directories

hive> INSERT OVERWRITE TABLE events SELECT a.* FROM profiles a;
hive> INSERT OVERWRITE TABLE events SELECT a.* FROM profiles a WHERE a.key < 100;
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/reg_3' SELECT a.* FROM events a;
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_4' select a.invites, a.pokes FROM profiles a;
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT COUNT(1) FROM invites a WHERE a.ds='<DATE>';
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT a.foo, a.bar FROM invites a;
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/sum' SELECT SUM(a.pc) FROM pc1 a;

Insert the aggregated results of one table into another

hive> FROM invites a INSERT OVERWRITE TABLE events SELECT a.bar, count(1) WHERE a.foo > 0 GROUP BY a.bar;
hive> INSERT OVERWRITE TABLE events SELECT a.bar, count(1) FROM invites a WHERE a.foo > 0 GROUP BY a.bar;

JOIN

hive> FROM pokes t1 JOIN invites t2 ON (t1.bar = t2.bar) INSERT OVERWRITE TABLE events SELECT t1.bar, t1.foo, t2.foo;

Multi-insert: write from one source table into several destinations

FROM src
INSERT OVERWRITE TABLE dest1 SELECT src.* WHERE src.key < 100
INSERT OVERWRITE TABLE dest2 SELECT src.key, src.value WHERE src.key >= 100 and src.key < 200
INSERT OVERWRITE TABLE dest3 PARTITION(ds='2008-04-08', hr='12') SELECT src.key WHERE src.key >= 200 and src.key < 300
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/dest4.out' SELECT src.value WHERE src.key >= 300;

Stream data through an external script

hive> FROM invites a INSERT OVERWRITE TABLE events SELECT TRANSFORM(a.foo, a.bar) AS (oof, rab) USING '/bin/cat' WHERE a.ds > '2008-08-09';
This streams the data in the map phase through the script /bin/cat (like Hadoop streaming). Streaming can be used on the reduce side in the same way (see the Hive tutorial for examples); a sketch follows.
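
As a hedged sketch of the reduce-side case (assuming the invites and events tables above): wrapping the mapped rows in a subquery and redistributing them with CLUSTER BY makes the outer TRANSFORM run in the reduce phase. It is expressed here through rbhive so it can be run from Ruby:

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  # the inner TRANSFORM runs map-side; CLUSTER BY shuffles rows to the
  # reducers, where the outer TRANSFORM runs (reduce-side streaming)
  connection.execute <<-HIVEQL
    FROM (
      FROM invites a
      SELECT TRANSFORM(a.foo, a.bar) USING '/bin/cat' AS (oof, rab)
      CLUSTER BY oof
    ) mapped
    INSERT OVERWRITE TABLE events
    SELECT TRANSFORM(mapped.oof, mapped.rab) USING '/bin/cat' AS (oof, rab)
  HIVEQL
end
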
A complete example

Create a table

CREATE TABLE u_data (
  userid INT,
  movieid INT,
  rating INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

Download the sample data file and unpack it

wget http://www.grouplens.org/system/files/ml-data.tar__0.gz
tar xvzf ml-data.tar__0.gz

Load the data into the table

LOAD DATA LOCAL INPATH 'ml-data/u.data'
OVERWRITE INTO TABLE u_data;

Count the total number of rows

SELECT COUNT(1) FROM u_data;
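
The same count can be fetched from Ruby with rbhive; a minimal sketch (the key Hive assigns to the aggregate column can vary, so the row is simply inspected):

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  # an aggregate query returns a one-element array of hashes
  puts connection.fetch('SELECT COUNT(1) FROM u_data').inspect
end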

Now let's do some more complex data analysis.

Create a weekday_mapper.py file to split the data by day of the week:
import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    userid, movieid, rating, unixtime = line.split('\t')
    # derive the day of the week from the unix timestamp
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print('\t'.join([userid, movieid, rating, str(weekday)]))
Use the mapper script

-- create the table, splitting each row's fields on the delimiter
CREATE TABLE u_data_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

-- make the Python script available to Hive
add FILE weekday_mapper.py;

Split the data by day of the week

INSERT OVERWRITE TABLE u_data_new
SELECT
  TRANSFORM (userid, movieid, rating, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM u_data;

SELECT weekday, COUNT(1)
FROM u_data_new
GROUP BY weekday;
Processing Apache weblog data

First decompose each web log line with a regular expression, then load the fields you need into the table.

add jar ../build/contrib/hive_contrib.jar;

CREATE TABLE apachelog (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
  "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;
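
Once the table is defined, ordinary queries run against the parsed fields; a hedged sketch through rbhive that counts requests per HTTP status code (the hits alias is introduced here for illustration):

require 'rubygems'
require 'rbhive'

RBHive.connect('hive.server.address') do |connection|
  # group the parsed log lines by status code
  rows = connection.fetch 'SELECT status, COUNT(1) AS hits FROM apachelog GROUP BY status'
  rows.each { |row| puts "#{row[:status]}: #{row[:hits]}" }
end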
