Prerequisites: the Hadoop cluster is up and running, and MySQL is already installed.
1. Unpack Hive
tar -zxvf apache-hive-0.14.0-bin.tar.gz -C /usr/local/
cd /usr/local
mv apache-hive-0.14.0-bin/ hive
2. Back up the configuration file templates
cd /usr/local/hive/conf
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
3. Edit hive-env.sh and set the following variables:
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
4. Edit hive-site.xml and configure the MySQL-backed metastore:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop0:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hive/tmp</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/tmp</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/tmp</value>
</property>
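The three tmp settings above all point at the same local directory, which Hive does not create on its own. A minimal sketch for creating it up front (the /usr/local/hive path is the install location from step 1):

```shell
# Create the directory referenced by hive.querylog.location,
# hive.exec.local.scratchdir and hive.downloaded.resources.dir.
# HIVE_HOME=/usr/local/hive is assumed from step 1.
HIVE_HOME=${HIVE_HOME:-/usr/local/hive}
mkdir -p "$HIVE_HOME/tmp"
chmod 1777 "$HIVE_HOME/tmp"   # world-writable with sticky bit, like /tmp
```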
If the metastore database does not exist, have DataNucleus create the schema automatically:
<!-- auto create -->
<property>
<name>datanucleus.readOnlyDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
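The ConnectionUserName/ConnectionPassword pair above must match an account that MySQL accepts connections from on the Hive host, or the metastore will fail to initialize. A hedged sketch of granting that access (root/root comes from the config above; the '%' host wildcard is an assumption here and should be tightened for production):

```shell
# Run on the MySQL host (hadoop0). Grants the metastore account access
# from any host; user/password match hive-site.xml above (MySQL 5.x syntax).
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;
SQL
```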
5. Copy the MySQL JDBC driver into $HIVE_HOME/lib
cp mysql-connector-java-5.1.17.jar $HIVE_HOME/lib/
6. Start Hive and enter the CLI
[root@hadoop0 bin]# cd $HIVE_HOME/bin
[root@hadoop0 bin]# ./hive
or:
[root@hadoop0 bin]# ./hive --service cli
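Once the CLI comes up, a quick non-interactive smoke test (show databases; is standard HiveQL and should list the built-in default database):

```shell
# Run from $HIVE_HOME/bin; -e executes a statement and exits.
./hive -e 'show databases;'
```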
7. Expose the metastore as a standalone service
Add the following property to hive-site.xml:
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop0:9083</value>
<description>Thrift URI for the remote metastore. ...</description>
</property>
Start the metastore service:
hive --service metastore &
Start the HiveServer2 service:
hive --service hiveserver2 &
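With both services backgrounded, you can confirm they are listening before pointing clients at them. A minimal sketch using bash's built-in /dev/tcp redirection (9083 matches hive.metastore.uris above; 10000 is the default HiveServer2 Thrift port):

```shell
#!/usr/bin/env bash
# Probe a TCP port on localhost; succeeds only if something accepts the connection.
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: listening"
  else
    echo "port $1: not listening"
  fi
}
check_port 9083    # metastore (hive.metastore.uris)
check_port 10000   # HiveServer2 default Thrift port
```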