The previous post set up the Hadoop HA pseudo-distributed platform on YARN; the scripts later in this post build on that setup.
-
node1 runs MySQL
-
node3 runs the metastore server
-
node4 acts as the Hive client
-
Install and configure MySQL (node1)
yum clean all
yum makecache
yum install mysql-server
Start MySQL and enable it at boot
service mysqld start
chkconfig mysqld on
Log in to MySQL and configure it:
use mysql;
delete from user;
grant all privileges on *.* to 'root'@'%' identified by '123' with grant option;
# grant all privileges on every database and table to user root from any host, with password 123
flush privileges;
# reload the grant tables
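The interactive statements above can also be replayed non-interactively; a minimal sketch that writes them to a file first (the password '123' is the one used in this post, the /tmp path is illustrative):

```shell
# Write the grant statements to a file so they can be replayed with
# mysql -u root < /tmp/hive-metastore-grants.sql
# (password '123' is the one used in this post; the path is illustrative).
cat > /tmp/hive-metastore-grants.sql <<'EOF'
use mysql;
delete from user;
grant all privileges on *.* to 'root'@'%' identified by '123' with grant option;
flush privileges;
EOF
```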
-
Send the Hive tarball to node3 and node4 with scp, and the MySQL driver jar to node3
-
Unpack and set the environment variables (node3)
Unpack
tar -zxvf apache-hive-1.2.1-bin.tar.gz
mv apache-hive-1.2.1-bin /opt/home/
Set the environment variables in /etc/profile
export HIVE_HOME=/opt/home/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin
Reload the environment
source /etc/profile
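A quick way to confirm the two profile lines took effect after sourcing (the HIVE_HOME path is the one used in this post):

```shell
# Re-create the two profile lines and verify Hive's bin directory really
# ended up on PATH (HIVE_HOME is the path used in this post).
export HIVE_HOME=/opt/home/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok" ;;                  # → PATH ok
  *)                    echo "PATH missing $HIVE_HOME/bin" ;;
esac
```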
-
Edit the Hive configuration
Location: /opt/home/apache-hive-1.2.1-bin/conf
cp hive-default.xml.template hive-site.xml
Edit hive-site.xml; in vi, clear out the template body first:
:.,$-1d
# deletes from the current line down to the second-to-last line
<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- warehouse location on HDFS -->
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123</value>
</property>
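Rather than trimming the large template in vi, the whole server-side hive-site.xml can be generated in one step with a heredoc; a sketch (values are the ones used in this post, written to /tmp so it can be inspected before moving it into conf/):

```shell
# Generate the metastore-server hive-site.xml in one step (values are the
# ones used in this post; /tmp is illustrative, move the file into
# /opt/home/apache-hive-1.2.1-bin/conf afterwards).
cat > /tmp/hive-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
  </property>
</configuration>
EOF
```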
-
Put node3's MySQL driver jar into apache-hive-1.2.1-bin/lib
mv mysql-connector-java-5.1.32-bin.jar /opt/home/apache-hive-1.2.1-bin/lib/
-
scp the Hive directory from node3 to node4 and set its environment variables (same as above, omitted)
scp -r apache-hive-1.2.1-bin/ node4:/opt/home/
-
Edit node4's hive-site.xml so the client talks to the metastore on node3, port 9083
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <!-- connect to the metastore on node3, port 9083 -->
  <value>thrift://node3:9083</value>
</property>
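The client-side file can be generated the same way; a sketch (values from this post, written to a /tmp path for inspection):

```shell
# node4's client hive-site.xml only needs the warehouse dir and the
# metastore URI (values from this post; the /tmp path is illustrative).
cat > /tmp/hive-site-client.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node3:9083</value>
  </property>
</configuration>
EOF
```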
-
Because the node4 client issues Hive operations (in other words, whichever node sends requests to the server needs this jline replacement), copy Hive's jline jar into Hadoop's /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/ directory and delete the older version already there, otherwise Hive will fail with an error.
cp lib/jline-2.12.jar /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/
cd /opt/home/hadoop-2.6.5/share/hadoop/yarn/lib/
rm -f jline-0.9.94.jar
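The copy-then-delete swap can be rehearsed on a scratch directory first to confirm exactly which jars survive; a small sketch (the demo directories and stub jar files are stand-ins, not the real cluster paths):

```shell
# Rehearse the jline swap on throwaway directories before touching the
# real yarn/lib (directories and empty stub jars are stand-ins).
YARN_LIB=/tmp/yarn-lib-demo
HIVE_LIB=/tmp/hive-lib-demo
mkdir -p "$YARN_LIB" "$HIVE_LIB"
touch "$YARN_LIB/jline-0.9.94.jar" "$HIVE_LIB/jline-2.12.jar"

rm -f "$YARN_LIB"/jline-0.*.jar              # drop the old jline
cp "$HIVE_LIB/jline-2.12.jar" "$YARN_LIB"/   # install Hive's jline
ls "$YARN_LIB"                               # → jline-2.12.jar
```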
-
Start up
On node3:
hive --service metastore
On node4:
hive
-
Updated startup script
#!/bin/bash
echo "start all zookeeper.."
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh start"
done
start-all.sh
for i in {3..4}; do
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh start resourcemanager"
done
ssh node3 "source /etc/profile;nohup hive --service metastore >>/dev/null 2>&1 &"
# start the Hive metastore server on node3 in the background, discarding its output
-
Updated shutdown script
#!/bin/bash
echo "stop all zookeeper.."
stop-all.sh
for i in {3..4}; do
  ssh node$i "/opt/home/hadoop-2.6.5/sbin/yarn-daemon.sh stop resourcemanager"
done
for i in {2..4}; do
  ssh node$i "/opt/home/zookeeper-3.4.6/bin/zkServer.sh stop"
done
ssh node3 "source /etc/profile;jps |grep RunJar|awk '{print \$1}'|xargs kill -9"
# kill the remote metastore process (RunJar) by name over ssh
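Grepping jps for RunJar kills any RunJar process, not just the metastore; a slightly more targeted variant matches the full command line instead. A sketch to run on node3 (the pattern assumes the metastore was launched exactly as in the start script above):

```shell
# Kill only processes whose command line matches the metastore invocation.
# The [e] trick stops pgrep -f from matching this script's own command line.
pids=$(pgrep -f 'hive --service metastor[e]' || true)
if [ -n "$pids" ]; then
  kill -9 $pids
else
  echo "no metastore process found"
fi
```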