Kylin 2.5.0 Installation - A Detailed Tutorial

Kylin Installation Guide

Package Versions

  • OS: CentOS 7
  • JDK: jdk-8u191-linux-x64
  • Hadoop: hadoop-2.9.2.tar
  • HBase: hbase-1.2.7-bin.tar
  • Hive: apache-hive-1.2.1-bin.tar
  • Kylin: apache-kylin-2.5.0-bin-hbase1x.tar
  • Spark: spark-2.4.3-bin-without-hadoop.tgz
  • ZooKeeper: zookeeper-3.4.6.tar
  • MySQL: mysql57-community-release-el7-10.noarch (replaces Derby as Hive's metastore database)
  • Sqoop: sqoop-1.4.7.bin__hadoop-2.6.0.tar (used to import data into Hive)

Install the Operating System

Operating System Configuration

Configure the Network

ip a    # check the server's current network settings
vi /etc/sysconfig/network-scripts/ifcfg-ens33
# set ONBOOT=yes in the configuration file
systemctl restart network
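If the machine also needs a static address, a minimal ifcfg-ens33 might look like the sketch below. The IPADDR matches the host used later in this guide; the gateway and DNS values are placeholder assumptions for your own network.

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.19.134
NETMASK=255.255.255.0
GATEWAY=192.168.19.2      # assumption: adjust to your gateway
DNS1=192.168.19.2         # assumption: adjust to your DNS server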

Disable the Firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Change the Server Hostname

[root@localhost ~]# vi /etc/hostname
# replace the file contents with your chosen hostname, e.g. Kylin2
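Alternatively, on CentOS 7 the same change can be made in one command; this sketch assumes the hostname Kylin2 used in the rest of this guide:

[root@localhost ~]# hostnamectl set-hostname Kylin2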

Map the Hostname to the IP Address (the lowercase alias kylin is added because the Hadoop/HBase configuration files below reference it)


[root@localhost ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.19.134 Kylin2 kylin


[root@localhost ~]# ping Kylin2
PING Kylin2 (192.168.19.134) 56(84) bytes of data.
64 bytes from Kylin2 (192.168.19.134): icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from Kylin2 (192.168.19.134): icmp_seq=2 ttl=64 time=0.025 ms

Configure Passwordless SSH (Secure Shell) Login

[root@localhost ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:tIejCLlCW+iQUwWzOWFqZde2smsz8UKrA2sKgrdVoSs root@localhost
The key's randomart image is:
+---[RSA 2048]----+
|  =+...          |
| oo*.  o         |
|..=   o o        |
|.o.o o + o       |
|+oo.. + S .      |
|=ooo B . o       |
|=oE = *          |
|o= = B .         |
|+ ..+ +          |
+----[SHA256]-----+
[root@localhost ~]# cd .ssh/
[root@localhost .ssh]# ll
total 8
-rw-------. 1 root root 1675 Jan 14 09:16 id_rsa
-rw-r--r--. 1 root root  396 Jan 14 09:16 id_rsa.pub
[root@localhost .ssh]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost .ssh]# ll
total 12
-rw-r--r--. 1 root root  396 Jan 14 09:16 authorized_keys
-rw-------. 1 root root 1675 Jan 14 09:16 id_rsa
-rw-r--r--. 1 root root  396 Jan 14 09:16 id_rsa.pub
[root@localhost .ssh]# chmod 0600 ~/.ssh/authorized_keys
[root@localhost .ssh]# ssh Kylin2
The authenticity of host 'kylin2 (192.168.19.134)' can't be established.
ECDSA key fingerprint is SHA256:h0KM6u7P3rzKiEPjNWO7H6FNRXtvRRpWgBs2aHJu2VU.
ECDSA key fingerprint is MD5:33:7d:02:f1:7c:61:86:74:f2:32:d0:a9:c7:42:46:bd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kylin2,192.168.19.134' (ECDSA) to the list of known hosts.
Last login: Tue Jan 14 09:11:34 2020 from 192.168.19.1
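The cat >> authorized_keys step above can also be done with ssh-copy-id, which appends the key and fixes permissions in one go (it prompts for the root password once):

[root@localhost ~]# ssh-copy-id root@Kylin2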

Install the Software

Upload the Installation Files


Install the JDK


[root@Kylin2 opt]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_191-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
        
[root@Kylin2 opt]# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

Extract the Packages

[root@Kylin2 opt]# tar -zxf hadoop-2.9.2.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf hbase-1.2.7-bin.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf apache-hive-1.2.1-bin.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf apache-kylin-2.5.0-bin-hbase1x.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf zookeeper-3.4.6.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C install/
[root@Kylin2 opt]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C install/
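The environment variables configured next point at shorter directory names than some of the tarballs extract to, so the extracted directories are assumed to be renamed first, for example:

[root@Kylin2 opt]# cd install/
[root@Kylin2 install]# mv apache-hive-1.2.1-bin hive-1.2.1
[root@Kylin2 install]# mv apache-kylin-2.5.0-bin-hbase1x kylin-2.5.0
[root@Kylin2 install]# mv spark-2.4.3-bin-without-hadoop spark-2.4.3
[root@Kylin2 install]# mv sqoop-1.4.7.bin__hadoop-2.6.0 sqoop-1.4.7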


Configure Environment Variables

[root@Kylin ~]# vi .bashrc

HBASE_MANAGES_ZK=false
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/opt/install/hadoop-2.9.2
HIVE_HOME=/opt/install/hive-1.2.1
HBASE_HOME=/opt/install/hbase-1.2.7
KYLIN_HOME=/opt/install/kylin-2.5.0
SPARK_HOME=/opt/install/spark-2.4.3
SQOOP_HOME=/opt/install/sqoop-1.4.7
CLASSPATH=.
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$KYLIN_HOME/bin:$SPARK_HOME/bin:$SQOOP_HOME/bin
export JAVA_HOME
export HADOOP_HOME
export HIVE_HOME
export HBASE_HOME
export KYLIN_HOME
export SPARK_HOME
export SQOOP_HOME
export CLASSPATH
export HBASE_MANAGES_ZK

[root@Kylin2 ~]# source .bashrc



[root@Kylin ~]# vi /etc/profile
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
##JAVA_HOME
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$JAVA_HOME/bin

##HADOOP_HOME
export HADOOP_HOME=/opt/install/hadoop-2.9.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

##HBASE_HOME
export HBASE_HOME=/opt/install/hbase-1.2.7
export PATH=$PATH:$HBASE_HOME/bin


##HIVE_HOME
export HIVE_HOME=/opt/install/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin


##KYLIN_HOME
export KYLIN_HOME=/opt/install/kylin-2.5.0
export PATH=$PATH:$KYLIN_HOME/bin


##SPARK_HOME
export SPARK_HOME=/opt/install/spark-2.4.3
export PATH=$PATH:$SPARK_HOME/bin

##SQOOP_HOME
export SQOOP_HOME=/opt/install/sqoop-1.4.7
export PATH=$PATH:$SQOOP_HOME/bin

[root@Kylin2 ~]# source /etc/profile
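A quick sanity check that the new shell picks everything up (these commands only print versions and paths, so they are safe to run at any point):

[root@Kylin2 ~]# java -version
[root@Kylin2 ~]# hadoop version
[root@Kylin2 ~]# echo $KYLIN_HOME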

Install Hadoop

  • Edit the configuration files
[root@Kylin hadoop]# pwd
/opt/install/hadoop-2.9.2/etc/hadoop

[root@Kylin hadoop]# vim hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/latest


[root@Kylin hadoop]# vim core-site.xml
<configuration>
<!-- NameNode access endpoint -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://kylin:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/install/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
</configuration>




[root@Kylin hadoop]# vim hdfs-site.xml
<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host that runs the secondary namenode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>kylin:50090</value>
</property>
<!-- maximum number of files a datanode serves at once -->
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>
<!-- number of datanode server threads -->
<property>
    <name>dfs.datanode.handler.count</name>
    <value>6</value>
</property>



[root@Kylin hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@Kylin hadoop]# vim mapred-site.xml

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<!-- JobHistory server; these are MapReduce properties, so they belong here rather than in yarn-site.xml -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>kylin:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>kylin:19888</value>
</property>
</configuration>




[root@Kylin hadoop]# vim yarn-site.xml

<!-- the auxiliary shuffle service MapReduce requires on every NodeManager -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- host that runs the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>kylin</value>
</property>
<!-- disable the physical-memory check -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- disable the virtual-memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>


  • Format the NameNode
[root@Kylin hadoop-2.9.2]# bin/hdfs namenode -format
  • Start HDFS
[root@Kylin hadoop-2.9.2]# sbin/start-dfs.sh
Starting namenodes on [Kylin]
Kylin: starting namenode, logging to /opt/install/hadoop-2.9.2/logs/hadoop-root-namenode-Kylin.out
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:h0KM6u7P3rzKiEPjNWO7H6FNRXtvRRpWgBs2aHJu2VU.
ECDSA key fingerprint is MD5:33:7d:02:f1:7c:61:86:74:f2:32:d0:a9:c7:42:46:bd.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: starting datanode, logging to /opt/install/hadoop-2.9.2/logs/hadoop-root-datanode-Kylin.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:h0KM6u7P3rzKiEPjNWO7H6FNRXtvRRpWgBs2aHJu2VU.
ECDSA key fingerprint is MD5:33:7d:02:f1:7c:61:86:74:f2:32:d0:a9:c7:42:46:bd.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /opt/install/hadoop-2.9.2/logs/hadoop-root-secondarynamenode-Kylin.out


  • Start YARN
[root@Kylin hadoop-2.9.2]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/install/hadoop-2.9.2/logs/yarn-root-resourcemanager-Kylin.out
localhost: starting nodemanager, logging to /opt/install/hadoop-2.9.2/logs/yarn-root-nodemanager-Kylin.out
  • Start the JobHistory server

    [root@kylin sbin]# mr-jobhistory-daemon.sh start historyserver
    starting historyserver, logging to /opt/install/hadoop-2.9.2/logs/mapred-root-historyserver-kylin.out
    
    
  • Verify


[root@kylin sbin]# jps
84563 NodeManager
109427 Jps
84197 SecondaryNameNode
84421 ResourceManager
83801 NameNode
83976 DataNode
109196 JobHistoryServer
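Beyond jps, a quick functional check is to list the HDFS root and open the standard Hadoop 2.x web UIs (default ports):

[root@kylin sbin]# hdfs dfs -ls /
# NameNode UI:   http://kylin:50070
# YARN UI:       http://kylin:8088
# JobHistory UI: http://kylin:19888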


Install Hive

  • Create the warehouse directories on HDFS
[root@Kylin ~]# $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
[root@Kylin ~]# $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
[root@Kylin ~]# $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
[root@Kylin ~]# $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse


  • Edit the configuration file

    [root@Kylin conf]# vim hive-env.sh
    # Set HADOOP_HOME to point to a specific hadoop install directory
      HADOOP_HOME=/opt/install/hadoop-2.9.2
    
    # Hive Configuration Directory can be controlled by:
      export HIVE_CONF_DIR=/opt/install/hive-1.2.1/conf
    
    

Switch the Metastore from Derby to MySQL

  • Install MySQL

1. [root@Kylin opt]# wget -i -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
2. [root@Kylin opt]# yum -y install mysql57-community-release-el7-10.noarch.rpm
3. [root@Kylin opt]# yum -y install mysql-community-server
4. [root@Kylin opt]# systemctl start mysqld.service   # start the MySQL service
5. Set the MySQL root password
   5.1 Look up the temporary password
   [root@Kylin opt]# grep "password" /var/log/mysqld.log
     A temporary password is generated for root@localhost: Z>juyDor2f#L

   5.2 Log in with the temporary password (quoted, since it contains shell metacharacters)
    [root@Kylin opt]# mysql -uroot -p'Z>juyDor2f#L'

   5.3 Change the password
       mysql> set global validate_password_policy=0;
       mysql> set global validate_password_length=1;
       mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
       mysql> exit;
   5.4 Restart the MySQL service
    [root@Kylin opt]# systemctl restart mysqld.service

6. Enable remote access to MySQL
   6.1  mysql> set global validate_password_policy=0;
   6.2  mysql> set global validate_password_length=1;
   6.3  mysql> GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY "123456";
   6.4  mysql> flush privileges;

7. [root@Kylin opt]# systemctl stop firewalld
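To confirm the remote grant from step 6 works, try a TCP login through the hostname with the new password (this assumes the kylin hostname resolves to this machine):

[root@Kylin opt]# mysql -h kylin -uroot -p123456 -e "select version();"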

  • Edit the configuration file

    $HIVE_HOME/conf/hive-site.xml
    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://kylin:3306/hive_mysql?createDatabaseIfNotExist=true&amp;useSSL=false</value>
            <description>JDBC connect string for a JDBC metastore</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>Driver class name for a JDBC metastore</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
            <description>username to use against metastore database</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>123456</value>
            <description>password to use against metastore database</description>
        </property>

        <property>
            <name>hive.exec.local.scratchdir</name>
            <value>/opt/install/hive-1.2.1/tmp/${user.name}</value>
            <description>Local scratch space for Hive jobs</description>
        </property>

        <property>
            <name>hive.downloaded.resources.dir</name>
            <value>/opt/install/hive-1.2.1/tmp/${hive.session.id}_resources</value>
            <description>Temporary local directory for added resources in the remote file system.</description>
        </property>

        <property>
            <name>hive.server2.logging.operation.log.location</name>
            <value>/opt/install/hive-1.2.1/tmp/operation_logs</value>
            <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
        </property>
    </configuration>
  • Upload the MySQL JDBC driver jar (mysql-connector-java) into hive/lib; a quick verification is sketched below
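A quick way to confirm the metastore switch took effect: run any Hive statement, then check that Hive's metadata tables have appeared in the hive_mysql database named in the JDBC URL above:

[root@Kylin ~]# hive -e "show databases;"
[root@Kylin ~]# mysql -uroot -p123456 -e "show tables from hive_mysql;"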

安裝Sqoop

  • Configure
[root@kylin conf]# vim sqoop-env.sh

#Set path to where bin/hadoop is available
 export HADOOP_COMMON_HOME=/opt/install/hadoop-2.9.2

#Set path to where hadoop-*-core.jar is available
 export HADOOP_MAPRED_HOME=/opt/install/hadoop-2.9.2

#set the path to where bin/hbase is available
 export HBASE_HOME=/opt/install/hbase-1.2.7

#Set the path to where bin/hive is available
 export HIVE_HOME=/opt/install/hive-1.2.1

#Set the path for where zookeper config dir is
 export ZOOCFGDIR=/opt/install/zookeeper-3.4.6

  • Copy the MySQL JDBC driver jar into sqoop_home/lib
  • Test that Sqoop can reach MySQL

[root@kylin sqoop-1.4.7]#  bin/sqoop list-databases -connect jdbc:mysql://kylin:3306 -username root -password 123456
Warning: /opt/install/sqoop-1.4.7/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /opt/install/sqoop-1.4.7/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/install/sqoop-1.4.7/../zookeeper does not exist! Accumulo imports will fail.
 For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
information_schema
hive_mysql
mysql
performance_schema
sys

  • Copy all jars from hive/lib into sqoop/lib; otherwise data cannot be imported into Hive
  • Copy hive/conf/hive-site.xml into sqoop/conf; otherwise Sqoop cannot see Hive's databases. A sample import is sketched below.
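With those two copies in place, a Hive import looks roughly like the following; the source database test and table user_info are placeholders for your own data:

[root@kylin sqoop-1.4.7]# bin/sqoop import \
    --connect jdbc:mysql://kylin:3306/test \
    --username root --password 123456 \
    --table user_info \
    --hive-import --hive-table user_info \
    -m 1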

Install ZooKeeper

  • Configure
[root@kylin zookeeper-3.4.6]# pwd
/opt/install/zookeeper-3.4.6
[root@kylin conf]# cp zoo_sample.cfg zoo.cfg
[root@kylin conf]# vi zoo.cfg
# data directory
dataDir=/root/zkdata
  • Start the service
[root@kylin zookeeper-3.4.6]# bin/zkServer.sh start conf/zoo.cfg
  • Verify the service
[root@kylin zookeeper-3.4.6]# jps
2548 QuorumPeerMain  # the ZooKeeper JVM process
2597 Jps

[root@kylin zookeeper-3.4.6]# bin/zkServer.sh status conf/zoo.cfg
JMX enabled by default
Using config: conf/zoo.cfg
Mode: standalone
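A further check with the bundled CLI client: connect, then list the root znode (a fresh instance contains only /zookeeper):

[root@kylin zookeeper-3.4.6]# bin/zkCli.sh -server kylin:2181
[zk: kylin:2181(CONNECTED) 0] ls /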

Install HBase

Make sure the ZooKeeper service is running


[root@kylin hbase-1.2.7]# jps
84563 NodeManager
84197 SecondaryNameNode
84421 ResourceManager
83801 NameNode
83976 DataNode
114906 Jps
109196 JobHistoryServer
60878 QuorumPeerMain  # zk

  • Edit hbase-env.sh
[root@Kylin conf]# vim hbase-env.sh

# The java implementation to use.  Java 1.7+ required.
  export JAVA_HOME=/usr/java/latest

  • Edit hbase-site.xml
[root@Kylin conf]# vim hbase-site.xml

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://kylin:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<!-- ZooKeeper connection settings -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>kylin</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

  • Start HBase
[root@Kylin hbase-1.2.7]# bin/start-hbase.sh

[root@kylin hbase-1.2.7]# jps
84563 NodeManager
116322 Jps
84197 SecondaryNameNode
84421 ResourceManager
61157 HMaster   # master (management) node
83801 NameNode
83976 DataNode
61325 HRegionServer # region (storage) node
109196 JobHistoryServer
60878 QuorumPeerMain
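A one-line smoke test through the HBase shell; on this single-node setup, status should report one active master and one region server:

[root@kylin hbase-1.2.7]# echo "status" | bin/hbase shell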


Install Spark (on YARN)

  • Configure
[root@kylin spark-2.4.3]# vi conf/spark-env.sh
HADOOP_CONF_DIR=/opt/install/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/opt/install/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=4
SPARK_EXECUTOR_MEMORY=1g
SPARK_DRIVER_MEMORY=1g
LD_LIBRARY_PATH=/opt/install/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath)
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH
Note: unlike standalone mode, there is no need to run start-all.sh here, because job execution is handed off to YARN.
[root@kylin spark-2.4.3]# ./bin/spark-shell
	--master yarn                 # run against the YARN cluster
	--deploy-mode client          # driver mode: spark-shell must run as client
	--executor-cores 4            # cores per executor
	--num-executors 2             # allocate 2 executor processes

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/09/25 00:14:40 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/09/25 00:14:43 WARN hdfs.DataStreamer: Caught exception
Spark context Web UI available at http://kylin:4040
Spark context available as 'sc' (master = yarn, app id = application_1569341195065_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
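A tiny job typed at the prompt above confirms that the executors really run on YARN; the arithmetic is deterministic (the sum of 1..1000 is 500500):

scala> sc.parallelize(1 to 1000).reduce(_ + _)
res0: Int = 500500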

Install Kylin

  • Edit kylin.properties to suit your environment
kylin.env.hadoop-conf-dir=/opt/install/hadoop-2.9.2/etc/hadoop
kylin.engine.spark-conf.spark.master=yarn
kylin.engine.spark-conf.spark.submit.deployMode=cluster
kylin.engine.spark-conf.spark.yarn.queue=default
kylin.engine.spark-conf.spark.driver.memory=2G
kylin.engine.spark-conf.spark.executor.memory=2G
kylin.engine.spark-conf.spark.yarn.executor.memoryOverhead=1024
kylin.engine.spark-conf.spark.executor.instances=2
kylin.engine.spark-conf.spark.executor.cores=1
kylin.engine.spark-conf.spark.shuffle.service.enabled=false
kylin.engine.spark-conf.spark.network.timeout=600
kylin.engine.spark-conf.spark.eventLog.enabled=true
kylin.engine.spark-conf.spark.hadoop.dfs.replication=2
kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress=true
kylin.engine.spark-conf.spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
kylin.engine.spark-conf.spark.eventLog.dir=hdfs:///kylin/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs:///kylin/spark-history
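Note that spark.eventLog.dir and spark.history.fs.logDirectory point at an HDFS path that does not exist yet, so it is assumed to be created up front:

[root@Kylin ~]# hdfs dfs -mkdir -p /kylin/spark-history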

You can run bin/check-env.sh first; as a rule, with the environment variables described above in place, the check passes.

Once it passes, start Kylin directly:

[root@Kylin kylin-2.5.0]# bin/kylin.sh start

Web access:

http://<host-ip>:7070/kylin/login

Username: ADMIN

Password: KYLIN


Installation complete.

Note: errors that may appear during check-env

  1. “Failed to create $WORKING_DIR. Please make sure the user has right to access $WORKING_DIR”

This happens when the working directory was created with insufficient permissions for Hive; re-grant permissions on the directory, as sketched below.
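A typical fix, assuming the default working directory /kylin (adjust to your kylin.env.hdfs-working-dir):

[root@Kylin ~]# hdfs dfs -mkdir -p /kylin
[root@Kylin ~]# hdfs dfs -chmod -R 777 /kylin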

  2. Failed to create $SPARK_HISTORYLOG_DIR. Please make sure the user has right to access $SPARK_HISTORY

    Generally this error does not occur on CentOS; the cause is a problem in how get-properties.sh assembles its output. Modify the file as follows.

    The original file:

    ## original file
    if [ $# != 1 ]
    then
        echo 'invalid input'
        exit -1
    fi
    
    IFS=$'\n'
    result=
    for i in `cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{ n = index($0,"="); print substr($0,n+1)}' | cut -c 1-`
    do
       :
       result=$i
    done
    echo $result
    

    The modified file:

    ## modified file
    if [ $# != 1 ]
    then
        echo 'invalid input'
        exit -1
    fi
    
    #IFS=$'\n'
    result=`cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{ n = index($0,"="); print substr($0,n+1)}' | cut -c 1-`
    #for i in `cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{ n = index($0,"="); print substr($0,n+1)}' | cut -c 1-`
    #do
    #   :
    #   result=$i
    #done
    echo $result
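    After the fix, the script should echo exactly one value per key, e.g. (the output shown matches the kylin.properties value configured earlier):

    [root@Kylin kylin-2.5.0]# bin/get-properties.sh kylin.env.hadoop-conf-dir
    /opt/install/hadoop-2.9.2/etc/hadoop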
    