Deploying Hive 2.0 on Hadoop 2.7 (exception handling added 2017.03) (illustrated walkthrough)

1. Download and extract


2. Install MySQL

The MySQL installation itself is omitted here; see the previous post, "MySQL Deployment".

After MySQL is installed and configured, copy the MySQL JDBC driver, mysql-connector-java-5.1.41.jar, into the lib folder under the Hive home directory; otherwise Hive cannot connect to MySQL.
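For example (a minimal sketch, assuming Hive ends up under /home/hive2.0 as in step 4 and the jar sits in the current directory):

# copy the JDBC driver into Hive's lib directory
cp mysql-connector-java-5.1.41.jar /home/hive2.0/lib/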

3. Create the hive user

1. service mysql start

2. mysql -u root -p

3. CREATE USER 'hive' IDENTIFIED BY 'hive';

4. GRANT ALL PRIVILEGES ON *.* TO 'hive'@'172.16.11.222' IDENTIFIED BY 'hive';

5. FLUSH PRIVILEGES;

6. create database hive;
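A quick sanity check that the new account works (a sketch: run it from the host named in the GRANT, 172.16.11.222, against the MySQL host mach40 used in the JDBC URL in step 5; passing the password on the command line is for illustration only):

# list databases as the new hive user; the hive database should appear
mysql -h mach40 -u hive -phive -e "SHOW DATABASES;"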

4. Install Hive 2.0 (on the Hadoop NameNode)

tar -zxvf apache-hive-2.0.0-bin.tar.gz

vim /etc/profile
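The original shows this edit as a screenshot; a minimal sketch of the lines it would add, assuming the extracted directory was moved to /home/hive2.0 (the path used in the rest of this post):

# point HIVE_HOME at the Hive install and put its bin on the PATH
export HIVE_HOME=/home/hive2.0
export PATH=$PATH:$HIVE_HOME/bin

# reload the profile so the variables take effect in the current shell
source /etc/profile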



cd /home/hive2.0/conf

cp hive-default.xml.template hive-site.xml

5. Edit the configuration file hive-site.xml (the copied template is full of default values; each of the entries below must be corrected to match your environment)

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://mach40:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>$HIVE_HOME/iotmp</value>
  <!-- Better to spell this out as an absolute path (e.g. /home/hive2.0/iotmp): the $HIVE_HOME form works in some environments, but on CentOS it causes errors -->
</property>

<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs, created with write-all (733) permission. For each connecting user, an HDFS scratch dir ${hive.exec.scratchdir}/&lt;username&gt; is created with ${hive.scratch.dir.permission}.</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>$HIVE_HOME/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <value>$HIVE_HOME/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
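If Hive cannot create the HDFS scratch directory itself, it can be created up front; a sketch using the standard HDFS shell, with the 733 mode from the description above:

# pre-create the HDFS scratch dir with write-all permission
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod 733 /tmp/hive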


The following entries need to be added for Spark SQL integration:


<property>
  <name>hive.metastore.uris</name>
  <value>thrift://mach40:9083</value>
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>

<property>
  <name>hive.server2.thrift.min.worker.threads</name>
  <value>5</value>
  <description>Minimum number of Thrift worker threads</description>
</property>

<property>
  <name>hive.server2.thrift.max.worker.threads</name>
  <value>500</value>
  <description>Maximum number of Thrift worker threads</description>
</property>

<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
  <description>Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
</property>

<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>mach42</value>
  <description>Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
</property>
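Once HiveServer2 is running (see step 6), the Thrift settings above can be exercised with Beeline, which ships with Hive; a sketch, assuming the metastore account from step 3 is accepted as the login:

# connect to HiveServer2 on the bind host and port configured above
beeline -u jdbc:hive2://mach42:10000 -n hive -p hive -e "show databases;"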


Add the following for HBase integration:


<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/hive1.22/lib/hive-hbase-handler-1.2.2.jar,file:///home/hive1.22/lib/protobuf-java-2.5.0.jar,file:///home/hive1.22/lib/hbase-client-1.2.5.jar,file:///home/hive1.22/lib/hbase-common-1.2.5.jar,file:///home/hive1.22/lib/zookeeper-3.4.5.jar,file:///home/hive1.22/lib/guava-14.0.1.jar</value>
</property>
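To verify the integration, an HBase-backed table can be created with Hive's stock HBase storage handler; a sketch with hypothetical table and column-family names:

# create a Hive table stored in HBase; key maps to the row key, value to cf1:val
hive -e "CREATE TABLE hbase_test(key int, value string) \
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' \
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val') \
TBLPROPERTIES ('hbase.table.name' = 'hbase_test');"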



Create the working directory:

cd /home/hive2.0

mkdir iotmp

 

6. Initialize the schema (this creates the Hive metastore tables in the MySQL hive database):

/home/hive2.0/bin/schematool -initSchema -dbType mysql

 


On success, schematool reports that the schema initialization completed (shown as a screenshot in the original).



Start the Hive services:
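The original documents this step with screenshots; the usual startup commands are sketched below (metastore and hiveserver2 are standard Hive service names; nohup keeps them alive after the terminal closes):

# start the metastore and HiveServer2 in the background
nohup hive --service metastore &
nohup hive --service hiveserver2 &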


Exception 1: the fix has already been folded into the configuration above.


Fix (already applied in the configuration above): modify the following entries in hive-site.xml:

hive.querylog.location

hive.exec.local.scratchdir

hive.downloaded.resources.dir

See those settings above for the exact values.


Exception 2:



Fix:

"Error: Duplicate key name 'PCS_STATS_IDX'"

This happens when the schema was already initialized once before, or tables were only partially imported, leaving stale tables behind in the hive database in MySQL. Drop the hive database in MySQL and recreate it (or drop the leftover Hive tables), then run the schema initialization again.
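A sketch of that reset (warning: it drops everything in the metastore database):

# recreate the metastore database, then re-run schema initialization
mysql -u root -p -e "DROP DATABASE hive; CREATE DATABASE hive;"
/home/hive2.0/bin/schematool -initSchema -dbType mysql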


Exception 3:



Fix:

This one is common: the cause is simply that the MySQL driver jar was never placed in Hive's lib directory (see step 2).


 

Testing (the original includes screenshots of the Hadoop web UI):

hive

create table test0 (id int, name int);

The table is created. Open a second terminal:

hive

show tables;

test0 is listed.


Looking at the Hadoop web UI, the test0 directory shows up there (screenshot in the original).


Open a third terminal:

mysql

show databases;

use hive;

show tables;

select * from TBLS;

The test0 table appears in the metadata.

Because MySQL, unlike Derby, allows multiple users to connect to the metastore at once, the Hive shell and HWI can now be used at the same time.



