Prerequisites
1. Install and start Hadoop
2. Install and start ZooKeeper
3. Install Spark
---
Dependencies
| Software | Version |
| --- | --- |
| Apache hbase-1.1.1-bin.tar.gz | 1.1.1 |
| spark-2.2.0-bin-2.6.0-cdh5.14.0.tgz | 2.2.0-bin-cdh5.14.0 |
| apache-kylin-2.6.3-bin-hbase1x.tar.gz | 2.6.3 |
---
Cluster plan
For now we only need to deploy Kylin on a single node.
| Host | IP | Daemons |
| --- | --- | --- |
| node1 | 192.168.88.120 | NameNode, DataNode, RunJar (Hive metastore), RunJar (HiveServer2), QuorumPeerMain, HMaster, HRegionServer, Kylin, NodeManager |
| node2 | 192.168.88.121 | SecondaryNameNode, JobHistoryServer, DataNode, HRegionServer, QuorumPeerMain, ResourceManager, HistoryServer, NodeManager |
| node3 | 192.168.88.122 | HRegionServer, NodeManager, DataNode, QuorumPeerMain |
Note:
1. kylin-2.6.3-bin-hbase1x depends on HBase version 1.1.1.
2. The hbase.zookeeper.quorum value must contain bare hostnames only (e.g. node01,node02); entries carrying a port, such as node01:2181, are not allowed.
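The port rule above can be checked mechanically before starting HBase; a minimal sketch (the hostnames are just this guide's examples):

```shell
# Reject an hbase.zookeeper.quorum value that carries ports:
# any ":" in the string means a port slipped in.
quorum="node01,node02,node03"
case "$quorum" in
  *:*) echo "invalid: remove ports such as node01:2181" ;;
  *)   echo "quorum ok" ;;
esac
# → quorum ok
```

A value like `node01:2181,node02:2181` would take the first branch instead.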
---
Install the dependent HBase 1.1.1
Upload and extract:
tar -zxvf /export/soft/hbase-1.1.1-bin.tar.gz -C /export/servers/
Edit hbase-env.sh and add the JAVA_HOME environment variable:
cd /export/servers/hbase-1.1.1/conf/
vim ./hbase-env.sh
# JAVA_HOME must already be configured on the machine
export JAVA_HOME=${JAVA_HOME}
export HBASE_MANAGES_ZK=false
Edit hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://node01:8020/hbase_1.1.1</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- New since 0.98: earlier versions had no .port property and the default port was 60000 -->
<property>
<name>hbase.master.port</name>
<value>16000</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>node01,node02,node03</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/export/servers/zookeeper-3.4.5-cdh5.14.0/zkdata</value>
</property>
<property>
<name>hbase.thrift.support.proxyuser</name>
<value>true</value>
</property>
<property>
<name>hbase.regionserver.thrift.http</name>
<value>true</value>
</property>
</configuration>
Copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory:
cd /export/servers/hbase-1.1.1/conf/
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/hdfs-site.xml ./
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/core-site.xml ./
Edit the regionservers file:
cd /export/servers/hbase-1.1.1/conf/
vim regionservers
node01
Configure the HBase environment variables:
cd /etc/profile.d/
vim ./hbase.sh
export HBASE_HOME=/export/servers/hbase-1.1.1
export PATH=$PATH:$HBASE_HOME/bin
Refresh the environment variables:
source /etc/profile
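After sourcing, you can sanity-check that the bin directory actually landed on PATH; a minimal sketch, assuming this guide's install path:

```shell
# Re-create what /etc/profile.d/hbase.sh exports, then verify PATH
# contains the HBase bin directory.
export HBASE_HOME=/export/servers/hbase-1.1.1
export PATH=$PATH:$HBASE_HOME/bin
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "hbase bin on PATH" ;;
  *)                     echo "hbase bin missing from PATH" ;;
esac
# → hbase bin on PATH
```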
Delete the stale HBase data in ZooKeeper
# enter the zkCli shell
/export/servers/zookeeper-3.4.5-cdh5.14.0/bin/zkCli.sh
# delete the /hbase znode
rmr /hbase
Start HBase:
cd /export/servers/hbase-1.1.1/bin/
start-hbase.sh
Verify:
# enter the hbase shell
hbase shell
# list the tables in the current database
list
---
Kylin installation and deployment
Upload and extract:
tar -zxvf /export/soft/apache-kylin-2.6.3-bin-hbase1x.tar.gz -C /export/servers/
Copy the core Hadoop, Hive, HBase, and Spark configuration files into Kylin's conf directory:
cd /export/servers/apache-kylin-2.6.3-bin-hbase1x/conf/
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/hdfs-site.xml ./
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/core-site.xml ./
cp /export/servers/hive-1.1.0-cdh5.14.0/conf/hive-site.xml ./
cp /export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/conf/spark-defaults.conf.template ./
mv ./spark-defaults.conf.template ./spark-defaults.conf
Add the Hadoop, Hive, HBase, and Spark home paths to bin/kylin.sh:
cd /export/servers/apache-kylin-2.6.3-bin-hbase1x/bin/
vim ./kylin.sh
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export HIVE_HOME=/export/servers/hive-1.1.0-cdh5.14.0
export HBASE_HOME=/export/servers/hbase-1.1.1
export SPARK_HOME=/export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0
Configure the Spark environment variables:
cd /etc/profile.d/
vim ./spark.sh
export SPARK_HOME=/export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0
export PATH=$SPARK_HOME/bin:$PATH
Edit conf/kylin.properties:
cd /export/servers/apache-kylin-2.6.3-bin-hbase1x/conf
vim kylin.properties
kylin.engine.spark-conf.spark.eventLog.dir=hdfs://node01:8020/apps/spark2/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs://node01:8020/apps/spark2/spark-history
kylin.engine.spark-conf.spark.yarn.archive=hdfs://node01:8020/apps/spark2/lib/spark-libs.jar
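The spark.yarn.archive setting assumes a spark-libs.jar already sits at that HDFS path; Kylin does not create it. A hedged sketch of building and uploading it, using this guide's Spark path (the guard keeps it harmless on a machine without the cluster tools):

```shell
# Bundle Spark's jars into one archive and upload it to the HDFS path that
# kylin.engine.spark-conf.spark.yarn.archive points at.
# Guarded: only runs where the jar and hdfs commands are available.
if command -v jar >/dev/null 2>&1 && command -v hdfs >/dev/null 2>&1; then
  cd /export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0
  jar cv0f spark-libs.jar -C jars/ .
  hdfs dfs -mkdir -p /apps/spark2/lib
  hdfs dfs -put -f spark-libs.jar /apps/spark2/lib/
else
  echo "jar/hdfs not found: run this on a cluster node"
fi
```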
Initialise Kylin's data path on HDFS:
hdfs dfs -mkdir -p /apps/kylin
Start the cluster
1. Start ZooKeeper
2. Start HDFS
3. Start the YARN cluster
4. Start the HBase cluster
5. Start the Hive metastore
nohup hive --service metastore &
6. Start HiveServer2
nohup hive --service hiveserver2 &
7. Start the YARN job history server
mr-jobhistory-daemon.sh start historyserver
8. Start the Spark history server (optional)
sbin/start-history-server.sh
9. Start Kylin
cd /export/servers/apache-kylin-2.6.3-bin-hbase1x/bin/
./kylin.sh start
Log in to Kylin

| Item | Value |
| --- | --- |
| URL | http://192.168.100.201:7070/kylin/models |
| Default username | ADMIN |
| Default password | KYLIN |