Pseudo-Distributed Installation of Hadoop 2.8.0 + HBase 1.3.1 + Hive 1.2.1 + Kylin 2.0

Test environment: CentOS 6.5 + JDK 1.8.0_131

1. Install Hadoop 2.8.0

1) Download hadoop-2.8.0

2) Extract it to /opt/app/hadoop-2.8.0

3) The pseudo-distributed configuration files are as follows (be sure to use localhost in pseudo-distributed mode):

vi  /opt/app/hadoop-2.8.0/etc/hadoop/core-site.xml 

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data</value>
        <description>Base directory for NameNode and other temporary data</description>
    </property>
</configuration>
 vi  /opt/app/hadoop-2.8.0/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/hadoopdata/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/hadoopdata/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>localhost:50090</value>
    </property>
</configuration>
 vi  /opt/app/hadoop-2.8.0/etc/hadoop/yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.staging.root.dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
</configuration>
 vi  /opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <!-- The heap must fit inside the reduce container size
             (mapreduce.reduce.memory.mb, 2048 MB below), or YARN will
             kill the reducer for exceeding its memory limit. -->
        <value>-Xms1000m -Xmx1638m</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>5120</value>
    </property>
    <property>
        <name>mapreduce.reduce.input.buffer.percent</name>
        <value>0.5</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <!-- mapred.tasktracker.* is a legacy MRv1 property; it is ignored under YARN -->
    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>localhost:10020</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
    </property>
</configuration>
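One detail worth checking in the memory settings above: the JVM heap set in mapreduce.reduce.java.opts must fit inside the container size set in mapreduce.reduce.memory.mb, or YARN will kill the reducer for exceeding its memory limit. A common convention (not a Hadoop requirement) is heap ≈ 80% of the container; a minimal sketch:

```shell
#!/bin/sh
# Derive a reducer's JVM -Xmx from its YARN container size.
# Rule of thumb: heap ~= 80% of mapreduce.reduce.memory.mb, leaving
# headroom for off-heap memory; the 0.8 ratio is a convention only.
container_mb=2048                     # mapreduce.reduce.memory.mb
heap_mb=$((container_mb * 80 / 100))
echo "-Xmx${heap_mb}m"                # prints -Xmx1638m
```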


vi  /opt/app/hadoop-2.8.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/opt/app/jdk1.8.0_131
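One step the list above omits: on a fresh install, the HDFS NameNode has to be formatted once before the first start in section 8, and the start scripts need passphraseless SSH to localhost. A sketch of this one-time setup, assuming the paths used throughout this post:

```shell
# One-time setup for a fresh pseudo-distributed install.
# start-all.sh / start-dfs.sh need passphraseless SSH to localhost:
#   ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
#   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Format the NameNode exactly once; re-formatting wipes HDFS metadata.
/opt/app/hadoop-2.8.0/bin/hdfs namenode -format

# Start HDFS and YARN, then check the daemons are up.
/opt/app/hadoop-2.8.0/sbin/start-dfs.sh
/opt/app/hadoop-2.8.0/sbin/start-yarn.sh
jps   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
```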

2. Install HBase 1.3.1

1) Download hbase-1.3.1

2) Extract it to /opt/app/hbase-1.3.1

3) In pseudo-distributed mode, the configuration is as follows (use the ZooKeeper bundled with HBase):

 vi  /opt/app/hbase-1.3.1/conf/hbase-site.xml

<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
 vi  /opt/app/hbase-1.3.1/conf/regionservers

localhost
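Once HBase is up (section 8), a quick smoke test from the HBase shell confirms the pseudo-distributed setup; the table name smoke and column family cf are arbitrary examples:

```shell
# HBase smoke test (run after start-hbase.sh in section 8).
# 'smoke' and 'cf' are arbitrary example names.
/opt/app/hbase-1.3.1/bin/hbase shell <<'EOF'
status
create 'smoke', 'cf'
put 'smoke', 'row1', 'cf:msg', 'hello'
scan 'smoke'
disable 'smoke'
drop 'smoke'
EOF
```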
3. Install MySQL
Installation steps are omitted here; see:

http://blog.csdn.net/cuker919/article/details/46481427

Notes:

1) Uninstall the bundled MySQL first

2) Preferably install MySQL under /usr/local/mysql; otherwise there is a lot of unnecessary extra work

3) Create a hive user so that Hive can access the MySQL database
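The hive user and database from note 3) can be created along these lines; the database name, user name, and password hive match the hive-site.xml values in section 4, and the narrow GRANT scope is a choice, not a requirement:

```shell
# Create the metastore database and hive user (run as the MySQL root user).
# latin1 is a common recommendation for older Hive metastores on MySQL
# (utf8 can exceed index key-length limits).
mysql -u root -p <<'EOF'
CREATE DATABASE hive DEFAULT CHARACTER SET latin1;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
EOF
```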

4. Install Hive 1.2.1

1) Download hive1.2.1

2) Extract it to /opt/app/apache-hive-1.2.1-bin/

3) The main configuration is as follows (remember to copy the MySQL JDBC driver into Hive's lib directory):

 vi /opt/app/apache-hive-1.2.1-bin/conf/

export JAVA_HOME=/opt/app/jdk1.8.0_131
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin/
export HBASE_HOME=/opt/app/hbase-1.3.1
export HIVE_AUX_JARS_PATH=/opt/app/apache-hive-1.2.1-bin/lib
export HIVE_CLASSPATH=/opt/app/apache-hive-1.2.1-bin/conf
Snippet: vi /opt/app/apache-hive-1.2.1-bin/conf/hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>


<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>


<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>


<property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
</property>
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hive/iotmp</value>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/hive/iotmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/home/hive/iotmp</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
4) Upload Hive's lib directory to the same path on HDFS:

 hadoop fs -put /opt/app/apache-hive-1.2.1-bin/lib/* /opt/app/apache-hive-1.2.1-bin/lib/
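Hive also expects its HDFS scratch and warehouse directories to exist and be writable; matching the hive.exec.scratchdir and hive.metastore.warehouse.dir values above, a minimal sketch (777 is permissive and acceptable only on a single-user test box):

```shell
# Create Hive's HDFS directories (paths match hive-site.xml above).
hadoop fs -mkdir -p /tmp/hive
hadoop fs -mkdir -p /user/hive/warehouse
# Wide-open permissions: fine for a one-user test VM, not production.
hadoop fs -chmod -R 777 /tmp/hive /user/hive/warehouse
```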

5. Install Kylin 2.0
1) Download apache-kylin-2.0.0-bin

2) Extract it to /opt/app/apache-kylin-2.0.0-bin/

3) The main configuration changes are as follows:

vi /opt/app/apache-kylin-2.0.0-bin/bin/find-hive-dependency.sh 

hive_conf_path=$HIVE_HOME/conf
hive_exec_path=$HIVE_HOME/lib/hive-exec-1.2.1.jar
vi /opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh

Modify HBASE_CLASSPATH_PREFIX to include hive_dependency:

 export HBASE_CLASSPATH_PREFIX=${KYLIN_HOME}/conf:${KYLIN_HOME}/lib/*:${KYLIN_HOME}/ext/*:${hive_dependency}:${HBASE_CLASSPATH_PREFIX}
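Before the first start, Kylin's bundled check-env.sh (shipped in the same bin/ directory as kylin.sh) verifies that KYLIN_HOME, HADOOP_HOME, HIVE_HOME, and HBASE_HOME from section 6 resolve correctly:

```shell
# Sanity-check the environment before starting Kylin for the first time.
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
$KYLIN_HOME/bin/check-env.sh
```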
6. Configure /etc/profile (vi /etc/profile)

## set java
export JAVA_HOME=/opt/app/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/opt/app/hadoop-2.8.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin
export HCAT_HOME=$HIVE_HOME/hcatalog
export HIVE_CONF=$HIVE_HOME/conf
export HBASE_HOME=/opt/app/hbase-1.3.1
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$KYLIN_HOME/bin
Run source /etc/profile to make the changes take effect.

7. Configure /etc/hosts

First, check the hostname:

[root@CentOS65x64 mysql]# hostname
CentOS65x64.localdomain


Map the hostname (CentOS65x64.localdomain) to 127.0.0.1 in /etc/hosts; otherwise ZooKeeper may fail to start in pseudo-distributed mode.
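The resulting /etc/hosts line looks like this (hostname taken from the output above):

```
# /etc/hosts
127.0.0.1   localhost CentOS65x64.localdomain
```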


8. Configuration complete; start the services

service mysql start
/opt/app/hadoop-2.8.0/sbin/start-all.sh
/opt/app/hadoop-2.8.0/sbin/mr-jobhistory-daemon.sh start historyserver
/opt/app/hbase-1.3.1/bin/start-hbase.sh 
 nohup hive --service metastore > /home/hive/metastore.log 2>&1 &
/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh start
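After the start commands above, jps gives a quick health check; a sketch of what to expect (exact process names can vary slightly between versions):

```shell
# Verify the daemons after the start commands above.
jps
# Roughly expected (pids omitted): NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager, JobHistoryServer, HMaster, HQuorumPeer,
# HRegionServer, a RunJar process for the Hive metastore, and Kylin's
# embedded Tomcat.
```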

Do not forget to start the Hive metastore (the nohup hive --service metastore line above); otherwise Kylin will report a "hive-meta-1.2.1.jar not found" error when building a cube.


9. Shutdown

/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh stop
/opt/app/hadoop-2.8.0/sbin/stop-all.sh
/opt/app/hbase-1.3.1/bin/stop-hbase.sh

Check the remaining processes with jps and stop them with kill -9 <pid>.