CentOS 7.7 CDH 6.2.1 Installation Tutorial


1. Downloading CDH 6.2.1

  1. Download the CDH parcels:
    https://archive.cloudera.com/cdh6/6.2.1/parcels/

  2. Download the CM RPMs:
    https://archive.cloudera.com/cm6/6.2.1/redhat7/yum/RPMS/x86_64/

    The full CM 6.2.1 repository lives at:
    https://archive.cloudera.com/cm6/6.2.1/

  3. After downloading, place the files under /opt/software/ on kino-cdh01; the later steps assume they are there.

2. Environment Setup

2.1 Machine Layout

IP               Hostname     Role
192.168.161.160  kino-cdh01   master
192.168.161.161  kino-cdh02   worker
192.168.161.162  kino-cdh03   worker

2.2 Configure hosts

On all three machines, edit the hosts file: vim /etc/hosts

192.168.161.160 kino-cdh01
192.168.161.161 kino-cdh02
192.168.161.162 kino-cdh03
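
If you only edited the file on kino-cdh01, one way to push it to the other two nodes (a sketch, assuming root SSH access; you will be prompted for passwords until section 2.6 is done) is:

for h in kino-cdh02 kino-cdh03; do
  scp /etc/hosts root@$h:/etc/hosts   # overwrite the remote hosts file with the master's copy
done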

2.3 Remove the Bundled JDK

[root@kino-cdh01 ~]# rpm -qa |grep jdk
java-1.8.0-openjdk-headless-1.8.0.222.b03-1.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.221-2.6.18.1.el7.x86_64
copy-jdk-configs-3.3-10.el7_5.noarch
java-1.8.0-openjdk-1.8.0.222.b03-1.el7.x86_64
java-1.7.0-openjdk-1.7.0.221-2.6.18.1.el7.x86_64

[root@kino-cdh01 ~]# rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.222.b03-1.el7.x86_64
[root@kino-cdh01 ~]# rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.221-2.6.18.1.el7.x86_64
[root@kino-cdh01 ~]# rpm -e --nodeps copy-jdk-configs-3.3-10.el7_5.noarch
[root@kino-cdh01 ~]# rpm -e --nodeps java-1.8.0-openjdk-1.8.0.222.b03-1.el7.x86_64
[root@kino-cdh01 ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.221-2.6.18.1.el7.x86_64
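
The same removal can be done in one pass on each node; a sketch, assuming nothing else you need matches these package names:

rpm -qa | grep -E 'openjdk|copy-jdk-configs' | xargs -r rpm -e --nodeps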

2.4 Remove the Bundled MariaDB

[root@kino-cdh01 ~]# rpm -qa | grep -i mariadb | xargs rpm -e --nodeps

Remove the leftover files:

[root@kino-cdh01 ~]# find / -name mysql | xargs rm -rf
[root@kino-cdh01 ~]# find / -name my.cnf | xargs rm -rf
[root@kino-cdh01 ~]# cd /var/lib/
[root@kino-cdh01 ~]# rm -rf mysql/

2.5 Disable the Firewall

Run on every machine:

[root@kino-cdh01 ~]# systemctl stop firewalld.service
[root@kino-cdh01 ~]# systemctl disable firewalld.service

Disable SELinux on all three machines: edit /etc/selinux/config and set SELINUX to disabled.

[root@kino-cdh01 ~]# vim /etc/selinux/config


SELINUX=disabled
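
The change to /etc/selinux/config only takes effect after a reboot; to turn SELinux off for the current session as well, you can additionally run:

setenforce 0    # switch to permissive mode immediately
getenforce      # should now report Permissive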

2.6 Configure Passwordless SSH

Run on every machine:

ssh-keygen -t rsa  # press Enter at every prompt

ssh-copy-id kino-cdh01   # type yes, then enter the password
ssh-copy-id kino-cdh02   # type yes, then enter the password
ssh-copy-id kino-cdh03   # type yes, then enter the password
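
A quick way to confirm passwordless login works from the current node:

for h in kino-cdh01 kino-cdh02 kino-cdh03; do
  ssh root@$h hostname   # should print each hostname without asking for a password
done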

2.7 Install the JDK

[root@kino-cdh01 ~]# mkdir /usr/java

Extract the jdk-8u181-linux-x64.tar.gz that was uploaded to the server into /usr/java:

[root@kino-cdh01 ~]# tar -zxvf /opt/software/jdk-8u181-linux-x64.tar.gz -C /usr/java/

Distribute /usr/java to the other servers:

[root@kino-cdh01 ~]# scp -r /usr/java root@kino-cdh02:/usr/java
[root@kino-cdh01 ~]# scp -r /usr/java root@kino-cdh03:/usr/java

Configure the JAVA_HOME environment variable (required on every host). Quote the EOF marker so that $PATH and $JAVA_HOME are written to the file literally instead of being expanded by the current shell:

[root@kino-cdh01 ~]# cat >> /etc/profile << 'EOF'
> #JAVA_HOME
> export JAVA_HOME=/usr/java/jdk1.8.0_181
> export PATH=$PATH:$JAVA_HOME/bin
> EOF
[root@kino-cdh01 ~]# source /etc/profile
[root@kino-cdh01 ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
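
Since every host needs the same JAVA_HOME settings, one way to replicate them (a sketch, assuming /etc/profile has not been customized differently on the other nodes) is:

for h in kino-cdh02 kino-cdh03; do
  scp /etc/profile root@$h:/etc/profile
  ssh root@$h "source /etc/profile && java -version"   # verify the JDK is picked up remotely
done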

3. Install MySQL

3.1 Extract mysql-5.7.26-1.el7.x86_64.rpm-bundle.tar

[root@kino-cdh01 ~]# tar -axvf /opt/software/mysql/mysql-5.7.26-1.el7.x86_64.rpm-bundle.tar

3.2 Install the MySQL RPMs (in this order)

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-devel-5.7.26-1.el7.x86_64.rpm

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-libs-compat-5.7.26-1.el7.x86_64.rpm

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm

[root@kino-cdh01 mysql]# rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm
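
If any of these rpm commands fails with missing dependencies on a minimal CentOS 7 install, libaio and net-tools are the usual culprits; installing them first and re-running the rpm command typically resolves it:

yum -y install libaio net-tools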

3.3 Check the MySQL Status

[root@kino-cdh01 mysql]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html

3.4 Start MySQL

[root@kino-cdh01 mysql]# systemctl start mysqld

3.5 Check the MySQL Status Again

[root@kino-cdh01 mysql]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since 六 2020-05-09 00:25:59 CST; 37s ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 5386 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 5297 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 5389 (mysqld)
    Tasks: 27
   CGroup: /system.slice/mysqld.service
           └─5389 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

5月 09 00:25:49 kino-cdh01 systemd[1]: Starting MySQL Server...
5月 09 00:25:59 kino-cdh01 systemd[1]: Started MySQL Server.

3.6 Find the Temporary root Password (the string at the end of the log line is the password)

[root@kino-cdh01 mysql]# grep 'temporary password' /var/log/mysqld.log
2020-05-08T16:25:54.253304Z 1 [Note] A temporary password is generated for root@localhost: .SdtPrX=L9TI

3.7 Change the root Password

[root@kino-cdh01 mysql]# mysql -uroot -p
Enter password: (enter the temporary password from above)

mysql> SET PASSWORD FOR 'root'@'localhost'= "Kino123.";


3.8 Allow root to Log In Remotely

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'Kino123.' WITH GRANT OPTION;

mysql> FLUSH PRIVILEGES;

mysql> exit;

3.9 Make MySQL Ignore Case in Table Names

Log in as root and edit /etc/my.cnf: under the [mysqld] section, add the line lower_case_table_names=1

Restart the MySQL service: systemctl restart mysqld
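
For reference, the relevant part of /etc/my.cnf after the edit looks like this:

[mysqld]
lower_case_table_names=1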

3.10 Install the MySQL JDBC Driver for CM

Copy mysql-connector-java-5.1.27-bin.jar into /usr/share/java and rename it to mysql-connector-java.jar:

[root@kino-cdh01 mysql]# tar -zxvf mysql-connector-java-5.1.27.tar.gz

[root@kino-cdh01 mysql]# cp mysql-connector-java-5.1.27/mysql-connector-java-5.1.27-bin.jar /usr/share/java/

[root@kino-cdh01 mysql]# mv /usr/share/java/mysql-connector-java-5.1.27-bin.jar /usr/share/java/mysql-connector-java.jar

[root@kino-cdh01 java]# ll /usr/share/java
總用量 2216
lrwxrwxrwx. 1 root root      23 2月   4 20:47 icedtea-web.jar -> ../icedtea-web/netx.jar
lrwxrwxrwx. 1 root root      25 2月   4 20:47 icedtea-web-plugin.jar -> ../icedtea-web/plugin.jar
-rw-r--r--. 1 root root   62891 6月  10 2014 jline.jar
-rw-r--r--. 1 root root 1079759 8月   2 2017 js.jar
-rw-r--r--. 1 root root 1007505 5月   9 00:34 mysql-connector-java.jar
-rw-r--r--. 1 root root   18387 8月   2 2017 rhino-examples.jar
lrwxrwxrwx. 1 root root       6 2月   4 20:47 rhino.jar -> js.jar
-rw-r--r--. 1 root root   92284 3月   6 2015 tagsoup.jar

Send the driver to every other server:

[root@kino-cdh01 java]# scp -r /usr/share/java/mysql-connector-java.jar root@kino-cdh02:/usr/share/java/
mysql-connector-java.jar 				100%  984KB  25.3MB/s   00:00    

[root@kino-cdh01 java]# scp -r /usr/share/java/mysql-connector-java.jar root@kino-cdh03:/usr/share/java/
mysql-connector-java.jar     			100%  984KB  30.7MB/s   00:00 

4. Install CM

4.1 Set Up a Local YUM Repository

Extract cloudera-repos.tar.gz into /var/www/html:

[root@kino-cdh01 ~]# mkdir -p /var/www/html
[root@kino-cdh01 ~]# tar -zxvf /opt/software/cloudera-repos.tar.gz -C /var/www/html

[root@kino-cdh01 html]# ll
總用量 0
drwxr-xr-x. 4 root root 29 5月   9 00:43 cloudera-repos
[root@kino-cdh01 html]# python -m SimpleHTTPServer 8900
Serving HTTP on 0.0.0.0 port 8900 ....
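
python -m SimpleHTTPServer runs in the foreground and stops when the shell closes; to keep the local repository available for the whole installation, one option (a sketch; the log path is just an example) is to run it in the background:

cd /var/www/html
nohup python -m SimpleHTTPServer 8900 > /var/log/cloudera-repo-http.log 2>&1 &
curl -s http://kino-cdh01:8900/ | head    # quick check that the directory listing is served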

Edit the local yum repo configuration file (it will be empty the first time you create it):

[root@kino-cdh01 html]# vim /etc/yum.repos.d/cloudera-manager.repo

[cloudera-manager]
name=cloudera-manager
baseurl=http://kino-cdh01:8900/cloudera-repos/cm6/6.2.1/redhat7/yum/
enabled=1
gpgcheck=0

Copy it to all the other nodes:

[root@kino-cdh01 html]# scp -r /etc/yum.repos.d/cloudera-manager.repo root@kino-cdh02:/etc/yum.repos.d
[root@kino-cdh01 html]# scp -r /etc/yum.repos.d/cloudera-manager.repo root@kino-cdh03:/etc/yum.repos.d
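
On each node you can verify that the repository is picked up:

yum clean all
yum repolist | grep cloudera-manager   # the cloudera-manager repo should be listed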

Install the CM server and agents.
On the master node:

[root@kino-cdh01 yum.repos.d]# yum -y install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server

On the worker nodes:

[root@kino-cdh02 yum.repos.d]# yum -y install cloudera-manager-agent cloudera-manager-daemons

Edit the CM agent configuration file.
Run on every node:

[root@kino-cdh01 yum.repos.d]# vim /etc/cloudera-scm-agent/config.ini

server_host=kino-cdh01  # set this to the master node's IP or hostname
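
Instead of editing the file by hand on every node, a sketch using sed over SSH (assuming the default server_host=localhost line is still present) is:

for h in kino-cdh01 kino-cdh02 kino-cdh03; do
  ssh root@$h "sed -i 's/^server_host=.*/server_host=kino-cdh01/' /etc/cloudera-scm-agent/config.ini"
done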

Log in to MySQL and create the databases:

[root@kino-cdh01 yum.repos.d]# mysql -uroot -p

CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hive DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

Configure the database for CM:

/opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm root Kino123.
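
The arguments are the database type, the database name, the database user, and the password, so the command above stores the scm schema in MySQL using the root account. If you created a dedicated scm user instead (a hypothetical example, not part of this guide), the call would look like:

/opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm Scm_password_123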

5. Start the CM Services

Start cloudera-scm-server on the master node:

[root@kino-cdh01 yum.repos.d]# systemctl start cloudera-scm-server

Start cloudera-scm-agent on every node (including the master):

[root@kino-cdh01 yum.repos.d]#  systemctl start cloudera-scm-agent

Check the status:

[root@kino-cdh01 yum.repos.d]# systemctl status cloudera-scm-server
[root@kino-cdh01 yum.repos.d]# systemctl status cloudera-scm-agent

Watch the server startup log:

[root@kino-cdh01 yum.repos.d]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
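
Startup takes a couple of minutes. Besides tailing the log, you can watch for the web port to come up:

ss -tlnp | grep 7180    # a LISTEN entry on 7180 means the web UI is reachable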

When the log shows that the Jetty server has started, cloudera-scm-server is up and running.
Then open http://kino-cdh01:7180 (the master's hostname or IP on port 7180) in a browser to reach the Cloudera Manager login page.


6. CDH Installation and Configuration

Log in to Cloudera Manager (the default account is admin / admin) and walk through the cluster setup wizard: accept the license, add the three hosts, choose the CDH 6.2.1 parcel, and let the wizard install the agents and distribute the parcel. The host inspector step usually reports two warnings, which are fixed as follows.
Fix for the first warning (vm.swappiness), to be run on every host.
First apply it to the running system:

sysctl vm.swappiness=10
cat /proc/sys/vm/swappiness

Then make it permanent:

echo 'vm.swappiness=10' >> /etc/sysctl.conf
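
The entry in /etc/sysctl.conf makes the setting survive reboots; to apply the file immediately as well, you can run:

sysctl -p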

Fix for the second warning (transparent hugepages), to be run on every host:

echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
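
On CentOS 7, /etc/rc.local is not executable by default, so the lines appended above will not run at boot unless it is made executable:

chmod +x /etc/rc.d/rc.local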

Once the fixes have been applied on every host, click Run Again in the inspector. Then continue through the wizard: choose the services to install, assign roles to the hosts, point the services that need a database (Hive, Hue, Oozie, the Activity Monitor) at the MySQL databases created earlier, review the configuration, and wait for the first run of the cluster to finish.


7. Hive on Spark Configuration

https://blog.csdn.net/Java_Road_Far/article/details/104899098


All of the steps above can also be cross-checked against this guide:
https://www.cnblogs.com/swordfall/p/10816797.html#auto_id_6

For the database, installing it the way the official documentation describes is recommended.

Official documentation:
https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/cm_ig_reqs_space.html#concept_tjd_4yc_gr

Hardware and software requirements:
Each node should have at least 16 GB of RAM, at least 200 GB of disk, and reliable network connectivity between nodes.


8. NameNode HA

In Cloudera Manager, open the HDFS service and run the "Enable High Availability" action, then follow the wizard (pick the host for the second NameNode and the JournalNode hosts, and accept the defaults for the rest). After roughly ten minutes the NameNode HA configuration is complete.


9. Adjust Default Parameter Settings

Spark parameter changes

Copy the spark-ext directory from the installation package to /usr/lib on every node, then search for the following three settings in Cloudera Manager and add the configuration to each of them (see the sketch after this list):

Spark Service Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh

Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh

History Server Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh
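The exact lines to paste into the three safety valves depend on what ships in the spark-ext package. As a purely hypothetical example of the kind of entry used, one might append the spark-ext jars to the Spark classpath:

# hypothetical example only -- the real entries come from the spark-ext package
export SPARK_DIST_CLASSPATH=$SPARK_DIST_CLASSPATH:/usr/lib/spark-ext/lib/*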


10. Install Phoenix (run on every node)

Copy apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz from the installation package to /root, extract it, and rename the resulting directory to phoenix.

Copy (do not move) phoenix-core-5.0.0-HBase-2.0.jar and the Phoenix server jar from /root/phoenix into /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/.
Copy (do not move) htrace-core-3.1.0-incubating.jar from /usr/lib/spark-ext/lib into /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/.

[root@bigdata1 ~]# cp /root/phoenix/phoenix-core-5.0.0-HBase-2.0.jar /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/

[root@bigdata1 ~]# cp /root/phoenix/phoenix-5.0.0-HBase-2.0-server.jar /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/

[root@bigdata1 ~]# cp /usr/lib/spark-ext/lib/htrace-core-3.1.0-incubating.jar /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/

Then change the permissions on these three jars to 777:

[root@bigdata1 ~]# chmod 777 /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/htrace-core-3.1.0-incubating.jar

[root@bigdata1 ~]# chmod 777 /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/phoenix-5.0.0-HBase-2.0-server.jar

[root@bigdata1 ~]# chmod 777 /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/phoenix-core-5.0.0-HBase-2.0.jar
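
Since every node needs the same jars, a sketch for copying them to the other HBase hosts (the hostnames here are assumptions; substitute your own):

for h in bigdata2 bigdata3; do
  scp /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/phoenix-core-5.0.0-HBase-2.0.jar \
      /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/phoenix-5.0.0-HBase-2.0-server.jar \
      /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/htrace-core-3.1.0-incubating.jar \
      root@$h:/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/lib/
done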

Open the Cloudera Manager web UI and modify the following two HBase settings.

HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml:

Name: phoenix.schema.isNamespaceMappingEnabled
Value: true
(enables namespace mapping)

Name: hbase.regionserver.wal.codec
Value: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
(required for secondary indexes)

HBase Client Advanced Configuration Snippet (Safety Valve) for hbase-site.xml:

Name: phoenix.schema.isNamespaceMappingEnabled
Value: true
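
If you prefer, the service-level safety valve can be filled in through its "View as XML" option with the equivalent snippet:

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>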

Restart HBase.

Copy the HDFS and HBase configuration files into phoenix/bin (on every node):

[root@bigdata1 ~]# cp /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hbase/conf/hbase-site.xml /root/phoenix/bin/
cp: overwrite ‘/root/phoenix/bin/hbase-site.xml’? y

[root@bigdata1 ~]# cp /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/etc/hadoop/core-site.xml /root/phoenix/bin/

[root@bigdata1 ~]# cp /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/etc/hadoop/hdfs-site.xml /root/phoenix/bin/

[root@bigdata1 ~]# cp /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/etc/hadoop/yarn-site.xml /root/phoenix/bin/

Connect to Phoenix:

[root@bigdata1 phoenix]# bin/sqlline.py 

If you hit the following error:

Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG (state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG
 at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
 at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
 at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
 at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:482)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1793)
 at org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getColumns(PhoenixDatabaseMetaData.java:589)
 at sqlline.SqlLine.getColumns(SqlLine.java:1103)
 at sqlline.SqlLine.getColumnNames(SqlLine.java:1127)
 at sqlline.SqlCompleter.<init>(SqlCompleter.java:81)
 at sqlline.DatabaseConnection.setCompletions(DatabaseConnection.java:84)
 at sqlline.SqlLine.setCompletions(SqlLine.java:1740)
 at sqlline.Commands.connect(Commands.java:1066)
 at sqlline.Commands.connect(Commands.java:996)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
 at sqlline.SqlLine.dispatch(SqlLine.java:809)
 at sqlline.SqlLine.initArgs(SqlLine.java:588)
 at sqlline.SqlLine.begin(SqlLine.java:661)
 at sqlline.SqlLine.start(SqlLine.java:398)
 at sqlline.SqlLine.main(SqlLine.java:291)
sqlline version 1.2.0

Fix it as follows:

[root@bigdata1 phoenix]# hbase shell
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.1.0-cdh6.2.1, rUnknown, Wed Sep 11 01:05:56 PDT 2019
Took 0.0020 seconds 
hbase(main):001:0> 
hbase(main):001:0> list
TABLE 
SYSTEM:CATALOG 
SYSTEM:FUNCTION 
SYSTEM:LOG 
SYSTEM:MUTEX 
SYSTEM:SEQUENCE 
SYSTEM:STATS 
6 row(s)
Took 0.3353 seconds 
=> ["SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:LOG", "SYSTEM:MUTEX", "SYSTEM:SEQUENCE", "SYSTEM:STATS"]
hbase(main):002:0> disable 'SYSTEM:CATALOG'
Took 0.8518 seconds 
hbase(main):003:0> snapshot 'SYSTEM:CATALOG', 'cata_tableSnapshot'
Took 0.2592 seconds 
hbase(main):004:0> clone_snapshot 'cata_tableSnapshot', 'SYSTEM.CATALOG'
Took 4.2676 seconds 
hbase(main):005:0> drop 'SYSTEM:CATALOG'
Took 0.2438 seconds 
hbase(main):006:0> quit

Then restart HBase and reconnect to Phoenix:

[root@bigdata1 phoenix]# bin/sqlline.py 
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix: none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/phoenix/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Connected to: Phoenix (version 5.0)
Driver: PhoenixEmbeddedDriver (version 5.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:>

10.1 List All Tables

0: jdbc:phoenix:> !tables
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+--+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | INDEX_STATE | |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+--+
| | SYSTEM | CATALOG | SYSTEM TABLE | | | | | | |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | | | | |
| | SYSTEM | LOG | SYSTEM TABLE | | | | | | |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | | | | |
| | SYSTEM | STATS | SYSTEM TABLE | | | | | | |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+--+
0: jdbc:phoenix:>

10.2 Create a Table

CREATE TABLE IF NOT EXISTS us_population (
state CHAR(2) NOT NULL,
city VARCHAR NOT NULL,
population BIGINT
CONSTRAINT my_pk PRIMARY KEY (state, city));

10.3 List the Tables Again

0: jdbc:phoenix:> !tables
+------------+--------------+----------------+---------------+----------+------------+----------------------------+-----------------+--------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | INDEX_STATE |
+------------+--------------+----------------+---------------+----------+------------+----------------------------+-----------------+--------------+
| | SYSTEM | CATALOG | SYSTEM TABLE | | | | | |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | | | |
| | SYSTEM | LOG | SYSTEM TABLE | | | | | |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | | | |
| | SYSTEM | STATS | SYSTEM TABLE | | | | | |
| | | US_POPULATION | TABLE | | | | | |
+------------+--------------+----------------+---------------+----------+------------+----------------------------+-----------------+--------------+
0: jdbc:phoenix:>

10.4 Insert Records

upsert into us_population values('NY','NewYork',8143197);
upsert into us_population values('CA','Los Angeles',3844829);
upsert into us_population values('IL','Chicago',2842518);

10.5 Query the Table


0: jdbc:phoenix:> select * from US_POPULATION;
+--------+--------------+-------------+
| STATE | CITY | POPULATION |
+--------+--------------+-------------+
| CA | Los Angeles | 3844829 |
| IL | Chicago | 2842518 |
| NY | NewYork | 8143197 |
+--------+--------------+-------------+
3 rows selected (0.043 seconds)

10.6 Drop the Table

0: jdbc:phoenix:> drop table us_population;

10.7 Quit

0: jdbc:phoenix:> !quit

11. Hive Test

Log in to Hue (either of the two Hue links shown in Cloudera Manager will work).

On first login, use username admin and password admin.

Be sure to remember the credentials you use on that first login: whatever you enter is what gets created in Hue's database, so the admin/admin pair above is fine. After logging in, add an hdfs user through Hue's user administration page.

Then log out of admin and log back in as the hdfs user.

Run the following SQL statements in order:

create table kino(name string, age int);

insert into kino values("kino", 20);

select * from kino where name = "kino" and age = 20;
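
If you would rather test from the command line than from Hue, the same check can be run through Beeline (assuming HiveServer2 runs on kino-cdh01 with the default port 10000):

beeline -u "jdbc:hive2://kino-cdh01:10000" -n hdfs -e "select * from kino where name = 'kino' and age = 20;"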



12. Kafka Test

12.1 Create a Topic

On any server where Kafka is installed, run the following command (it prints a fair amount of log output):

[root@bigdata1 ~]# kafka-topics --zookeeper 10.3.4.41:2181 --create --replication-factor 3 --partitions 1 --topic mykafkatest
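
To confirm the topic was created, you can list and describe it:

kafka-topics --zookeeper 10.3.4.41:2181 --list
kafka-topics --zookeeper 10.3.4.41:2181 --describe --topic mykafkatest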

12.2 Produce Messages to the Topic

[root@bigdata1 ~]# kafka-console-producer --broker-list 10.3.4.41:9092 --topic mykafkatest

(after some log output, a > prompt appears)

> type a message and press Enter to send it

12.3 Consume Messages from the Topic

[root@bigdata3 ~]# kafka-console-consumer --bootstrap-server 10.3.4.41:9092 --from-beginning --topic mykafkatest

Now send a few messages from the producer and check that they show up on the consumer side.
