Single-Node Impala

Prerequisite: install Hadoop and Hive first. See https://blog.csdn.net/jing_er_/article/details/106664707

1. Download the Impala RPM packages from http://archive.cloudera.com/beta/impala-kudu/redhat/7/x86_64/impala-kudu/0/RPMS/x86_64/

2. Download the dependency package bigtop-utils:

http://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/5.9.0/RPMS/noarch/bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm
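The downloads in steps 1 and 2 can be scripted. A minimal sketch, with the package names and versions taken from the install commands in step 3 below (it only prints the URLs; pipe the output to `xargs -n1 wget` to actually fetch them):

```shell
# Mirror paths from the links above.
IMPALA_BASE="http://archive.cloudera.com/beta/impala-kudu/redhat/7/x86_64/impala-kudu/0/RPMS/x86_64"
IMPALA_VER="2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7"

# bigtop-utils first, then the six Impala packages installed in step 3.
urls="http://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/5.9.0/RPMS/noarch/bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm"
for pkg in impala-kudu impala-kudu-catalog impala-kudu-state-store \
           impala-kudu-server impala-kudu-shell impala-kudu-udf-devel; do
    urls="$urls ${IMPALA_BASE}/${pkg}-${IMPALA_VER}.x86_64.rpm"
done

# One URL per line; feed to `xargs -n1 wget` to download.
echo "$urls" | tr ' ' '\n'
```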

3. Install

yum install mysql-connector-java

sudo yum -y install cyrus-sasl-plain lsb ntp

rpm -ivh bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm

rpm -ivh impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm --nodeps

rpm -ivh impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm
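After the rpm commands above, it is worth confirming that every package actually registered with rpm (the `--nodeps` install in particular can mask problems). A small check loop over the seven packages installed above:

```shell
# Query rpm for each package installed in step 3; print one line per package.
status=$(for pkg in bigtop-utils impala-kudu impala-kudu-catalog \
                    impala-kudu-state-store impala-kudu-server \
                    impala-kudu-shell impala-kudu-udf-devel; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
        echo "OK       $pkg"
    else
        echo "MISSING  $pkg"
    fi
done)
echo "$status"
```

Any line reporting MISSING means the corresponding `rpm -ivh` step needs to be rerun.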

4. Configure

vi /etc/default/bigtop-utils

           export JAVA_HOME=/usr/local/jdk1.8.0_211

vi /etc/default/impala

           IMPALA_CATALOG_SERVICE_HOST=192.168.2.111

           IMPALA_STATE_STORE_HOST=192.168.2.111

systemctl restart ntpd

Copy the configuration files

cp /home/data/hive/apache-hive-3.1.2-bin/conf/hive-site.xml /etc/impala/conf.dist/

cp /home/data/hadoop/hadoop-3.2.1/etc/hadoop/core-site.xml /etc/impala/conf.dist/

cp /home/data/hadoop/hadoop-3.2.1/etc/hadoop/hdfs-site.xml /etc/impala/conf.dist/

Add the following settings

# hdfs-site.xml

<!-- impala configuration -->
<property>
        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
        <value>true</value>
</property>
<property>
        <name>dfs.block.local-path-access.user</name>
        <value>impala</value>
</property>
<property>
        <name>dfs.client.file-block-storage-locations.timeout.millis</name>
        <value>60000</value>
</property>

# core-site.xml

<!-- impala configuration -->
<property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
</property>
<property>
        <name>dfs.client.read.shortcircuit.skip.checksum</name>
        <value>false</value>
</property>
<property>
        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
        <value>true</value>
</property>

Modify hive-site.xml

<property>
        <name>hive.metastore.uris</name>
        <value>thrift://192.168.2.111:9083</value>
</property>
<property>
        <name>hive.metastore.client.socket.timeout</name>
        <value>3600</value>
</property>
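A stray character in any `<property>` block can render a config file unparseable, so before restarting anything it helps to confirm that the three edited files are still well-formed XML. A sketch using Python's stdlib parser (assumes `python3` is on the PATH; the file paths match the `cp` targets above):

```shell
# Parse a file as XML; report OK or BROKEN instead of a stack trace.
check_xml() {
    python3 -c "import sys, xml.dom.minidom as m; m.parse(sys.argv[1])" "$1" 2>/dev/null \
        && echo "OK      $1" || echo "BROKEN  $1"
}

for f in hdfs-site.xml core-site.xml hive-site.xml; do
    check_xml "/etc/impala/conf.dist/$f"
done
```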

Restart Hadoop

cd /home/data/hadoop/hadoop-3.2.1/sbin
./stop-all.sh
./start-all.sh

# Start Hive

#nohup hive --service metastore &

#nohup hive --service hiveserver2 &

/etc/init.d/impala-state-store start

/etc/init.d/impala-catalog start

/etc/init.d/impala-server start
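Once all three daemons are started, each exposes a debug web UI on its default port (impalad 25000, statestore 25010, catalogd 25020), which gives a quick reachability check. A sketch assuming `curl` is available and the daemons run on localhost:

```shell
# Probe each daemon's default web UI port and report UP/DOWN.
status=$(for pair in impalad:25000 statestore:25010 catalogd:25020; do
    svc=${pair%%:*}
    port=${pair##*:}
    if curl -s -o /dev/null --max-time 2 "http://localhost:${port}/"; then
        echo "UP    $svc (port $port)"
    else
        echo "DOWN  $svc (port $port)"
    fi
done)
echo "$status"
```

A DOWN daemon usually means a config error; check /var/log/impala for the corresponding log file.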

Verify: run impala-shell
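A simple smoke test through the shell, guarded so it is skipped on machines where impala-shell is not on the PATH (the IP matches the /etc/default/impala settings above):

```shell
# List databases through impala-shell, or explain why we could not.
result=$(
    if command -v impala-shell >/dev/null 2>&1; then
        impala-shell -i 192.168.2.111 -q "SHOW DATABASES;"
    else
        echo "impala-shell not found on PATH; skipping smoke test"
    fi
)
echo "$result"
```

Note that tables created through Hive become visible to Impala only after running `INVALIDATE METADATA;` once in impala-shell.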

Reference: https://blog.csdn.net/lukabruce/article/details/82970502#1%20%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87

 

 

 
