Hive installation and connecting to HiveServer2 with Python Thrift

1. Software list

apache-hive-1.1.1-bin.tar.gz

mysql-5.6.16.tar.gz

mysql-connector-java-5.6-bin.jar

sasl-0.2.1.tar.gz

thrift-0.10.0.zip

pyhs2-0.6.0.tar.gz

2. Installation

1) Planning

Everything is installed on the existing Spark cluster; see the earlier post 《Spark 安裝》 for the cluster setup.

192.168.200.171: MySQL database, Hive metastore server, and Hive Thrift server (HiveServer2)

192.168.200.170: Hive client
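
The hive-site.xml later in this post refers to the metastore/HiveServer2 host by the hostname bdml-m01. This assumes bdml-m01 resolves to 192.168.200.171 on every node, for example via an /etc/hosts entry (adjust to your own naming):

192.168.200.171   bdml-m01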

2) MySQL installation

I originally wanted to install from the RPM packages, but the download is more than 300 MB, so I downloaded the source and compiled it instead. The steps are below. Before installing, remove any previously installed MySQL with yum remove.

#!/bin/bash
MYSQL_PASSWD="123456"
#-------------------------------------------
# Step 1: remove old MySQL, install build dependencies, unpack the source
#-------------------------------------------
yum remove -y mysql mysql-server
yum install -y make gcc-c++ cmake bison-devel ncurses-devel
cd /src_mysql
tar xvf mysql-5.6.16.tar.gz
cd mysql-5.6.16
#------------------------------------------
# Step 2: create the mysql user and build/install from source
#------------------------------------------
useradd -M -s /sbin/nologin mysql
cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DWITH_MYISAM_STORAGE_ENGINE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_MEMORY_STORAGE_ENGINE=1 \
-DWITH_READLINE=1 \
-DENABLED_LOCAL_INFILE=1 \
-DMYSQL_DATADIR=/usr/local/mysql/data \
-DMYSQL_USER=mysql
make -j 4 && make install
#-------------------------------------
# Step 3: set ownership, install config files and init script, initialize the data directory, start mysqld
#-------------------------------------
cd && chown -R mysql:mysql /usr/local/mysql/
cp /usr/local/mysql/support-files/my-default.cnf /etc/my.cnf
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
sed -i 's%^basedir=%basedir=/usr/local/mysql%' /etc/init.d/mysqld
sed -i 's%^datadir=%datadir=/usr/local/mysql/data%' /etc/init.d/mysqld
chkconfig mysqld on
/usr/local/mysql/scripts/mysql_install_db \
--defaults-file=/etc/my.cnf \
--basedir=/usr/local/mysql/ \
--datadir=/usr/local/mysql/data/ \
--user=mysql
ls /usr/local/mysql/data/
ln -s /usr/local/mysql/bin/* /bin/
service mysqld start
#-------------------------------------
# Step 4: run mysql_secure_installation non-interactively with expect
#-------------------------------------
echo "now let's begin mysql_secure_installation "
if [ ! -e /usr/bin/expect ]
then yum install expect -y
fi
echo '#!/usr/bin/expect
set timeout 60
set password [lindex $argv 0]
spawn mysql_secure_installation
expect {
"enter for none" { send "\r"; exp_continue}
"Y/n" { send "Y\r" ; exp_continue}
"password" { send "$password\r"; exp_continue}
"Cleaning up" { send "\r"}
}
interact ' > mysql_secure_installation.exp
chmod +x mysql_secure_installation.exp
./mysql_secure_installation.exp $MYSQL_PASSWD
3) Hive installation and configuration

Edit .bashrc:

export HIVE_HOME=/home/hadoop/cloud/hive
export PATH=$PATH:$HIVE_HOME/bin 

Log in to MySQL and create the metastore database:

CREATE DATABASE hive DEFAULT CHARACTER SET latin1;

The latin1 character set must be specified, otherwise Hive will throw errors later; an existing database can also be changed with alter database hive character set latin1;.
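
The hive-site.xml below connects to the metastore database as user hive with password hive, so that account has to exist in MySQL with privileges on the hive database. A minimal sketch, run as root on 192.168.200.171 (the '%' host wildcard is an assumption; restrict it to the Hive hosts if you prefer):

mysql -uroot -p123456 -e "
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;"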

Unpack apache-hive-1.1.1-bin.tar.gz:

tar -xvf apache-hive-1.1.1-bin.tar.gz

mv apache-hive-1.1.1-bin hive

In the hive/conf directory:

cp hive-env.sh.template hive-env.sh

Edit hive-env.sh:

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
export HADOOP_HOME=/home/hadoop/cloud/hadoop

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/home/hadoop/cloud/hive/conf

cp hive-default.xml.template hive-site.xml

Edit hive-site.xml and change the following properties:

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://bdml-m01:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
 
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>

 <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://bdml-m01:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/home/hadoop/cloud/hive/tmp</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
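
The operation-log directory configured above should exist and be writable by the user that runs HiveServer2 (a small precaution; HiveServer2 can usually also create it on demand):

mkdir -p /home/hadoop/cloud/hive/tmp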

Start Hive:

nohup hive --service metastore &
nohup hive --service hiveserver2 &

Copy the Hive directory and configuration to 192.168.200.170 and start Hive there to verify the setup.
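
To check that HiveServer2 itself is accepting connections, beeline (shipped with Hive) can connect from the client. The credentials follow the hive-site.xml above; the query is just an example:

beeline -u "jdbc:hive2://192.168.200.171:10000" -n hive -p hive
# inside beeline:
# show databases;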

3. Configure pyhs2

Because the server side is HiveServer2, the client needs pyhs2. If the installation hits problems, install whatever dependency packages the error messages ask for. The packages are listed below; an installation sketch follows the list.

sasl-0.2.1.tar.gz

thrift-0.10.0.zip

pyhs2-0.6.0.tar.gz
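
A minimal installation sketch using the downloaded archives (pip can install local source archives directly; the gcc/python-devel/cyrus-sasl-devel packages are the usual build prerequisites for the sasl C extension, and the exact package names may differ on your distribution):

yum install -y gcc gcc-c++ python-devel cyrus-sasl-devel
pip install sasl-0.2.1.tar.gz
pip install thrift-0.10.0.zip
pip install pyhs2-0.6.0.tar.gz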

Test program:

# -*- coding: utf-8 -*-

import pyhs2

class HiveClient(object):

    def __init__(self, db_host, user, password, database, port=10000, authMechanism="PLAIN"):
        # connect to HiveServer2; PLAIN is the usual authMechanism when
        # hive.server2.authentication is left at its default
        self.conn = pyhs2.connect(host=db_host, port=port, authMechanism=authMechanism,
            user=user, password=password, database=database)

    def query(self, sql):
        with self.conn.cursor() as cursor:
            cursor.execute(sql)
            # fetch() retrieves all result rows
            return cursor.fetch()

    def close(self):
        self.conn.close()


def main():
    hive_client = HiveClient(db_host='192.168.200.171', port=10000, user='hive', password='mypass',
        database='hive', authMechanism='PLAIN')
    result = hive_client.query('select * from sogouq2 limit 10')
    print(result)
    hive_client.close()


if __name__ == '__main__':
    main()  



