CDH 6.3 Installation and Configuration Walkthrough

Environment Requirements

Installing CDH 6.3 on Redhat 7. The installation steps for CDH 6 are the same as for CDH 5 and consist of four main parts:

1. Pre-installation preparation, including installing the operating system, disabling the firewall, and synchronizing server clocks;
2. Installing an external database such as MySQL;
3. Installing Cloudera Manager;
4. Installing the CDH cluster.

Please note the prerequisites for installing CDH 6:

Supported external databases: MySQL 5.7 or higher, MariaDB 5.5 or higher, PostgreSQL 8.4 or higher, Oracle 12c or higher

JDK: Oracle JDK 1.8 (JDK 1.7 is no longer supported)

Supported operating systems: RHEL 6.8 or higher, RHEL 7.2 or higher, SLES 12 SP2 or higher, Ubuntu 16 or higher

The test environment used here:
1. CM and CDH version 6.3
2. Redhat 7.7
3. JDK 1.8.0_181
4. MariaDB 5.5.56
5. Installed as the root user

Prerequisites

Hostname and hosts configuration:

All cluster nodes must be able to reach each other and should use static IP addresses. The IP-to-hostname mapping is configured in /etc/hosts, and each node's hostname is set in /etc/hostname.
Starting from the CentOS 7 machine configured in the earlier installation walkthrough, I cloned two more VMs, planning a three-node CDH cluster.

[root@node01 app]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 node01
192.168.1.82 node02
192.168.1.83 node03

The ssh_all.sh remote execution script

To run commands on all nodes from a single machine, write a simple remote-execution shell script:

[root@node01 shell]# vim ssh_all.sh
#!/bin/bash
# Run the given command on every node (usage: ./ssh_all.sh "systemctl stop firewalld").
for i in 1 2 3
do
    ssh "node0$i" "$@"
done

Save and exit, then make the file executable.

[root@node01 shell]# chmod u+x ssh_all.sh
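
Later steps also copy individual files (hosts entries, repo files, authorized_keys) to every node, so a matching copy helper can save some typing. A minimal sketch, not part of the original setup:

[root@node01 shell]# vim scp_all.sh
#!/bin/bash
# Copy a local file to the same path on every other node (usage: ./scp_all.sh /etc/hosts).
for i in 2 3
do
    scp "$1" "root@node0$i:$1"
done
[root@node01 shell]# chmod u+x scp_all.sh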

Disable SELinux:

Check the SELinux status:

1. /usr/sbin/sestatus -v  ## if the SELinux status field shows enabled, SELinux is on

SELinux status: enabled

2. getenforce  ## this command can also be used to check

Turn off SELinux:

1. Temporarily (no reboot required):

setenforce 0    # switch SELinux to permissive mode
setenforce 1    # switch SELinux to enforcing mode

2. Permanently, by editing the configuration file (requires a reboot):

Edit the /etc/selinux/config file

Change SELINUX=enforcing to SELINUX=disabled

Then reboot the machine.

Run on all nodes:

[root@node01 ~]# setenforce 0

Edit the configuration file /etc/selinux/config:


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Disable the firewall:

[root@node01 shell]# ./ssh_all.sh "systemctl stop firewalld"
root@node01's password: 
root@node02's password: 
root@node03's password: 
[root@node01 shell]# ./ssh_all.sh "systemctl disable firewalld"
root@node01's password: 
root@node02's password: 
root@node03's password: 
[root@node01 shell]# ./ssh_all.sh "systemctl status firewalld"
root@node01's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
root@node02's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
root@node03's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Configure passwordless SSH between the cluster nodes

First, generate a key pair on every machine:

[root@node01 shell]# ./ssh_all.sh ssh-keygen -t rsa
root@node01's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3Ruea4VORreNw15CkwqFQGtLmcrfdTzPXUAiPlvTy5M root@node01
The key's randomart image is:
+---[RSA 2048]----+
|       .o.... .  |
|         =...+   |
|        * o.o o. |
|     . + o.=.+=+ |
|      o S oo+*E=.|
|       . . o==B==|
|        . .++o ++|
|            o..  |
|           ..    |
+----[SHA256]-----+
root@node02's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:lWpt2wSPqPq7YvRdgtJs6QWG6PbRuym6nMgMpoI87DM root@node02
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|           .     |
|   . .    +      |
|  . . o  = +     |
| .   = +S + o    |
|  o + Boo..+     |
|=o o *.+ o. .    |
|OE .=.+..        |
|=+O+.+*+         |
+----[SHA256]-----+
root@node03's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/rNvsvYvqpr6+cugBEoauyrDNpq2aC37aZ8XQwlMp/w root@node03
The key's randomart image is:
+---[RSA 2048]----+
|    o. .         |
|    .oo          |
|     o. .        |
|      .o         |
|...   .ES        |
|o+ .   +         |
|= . . . +        |
|+X +...* .+ o    |
|&+Bo+=*o=+=Xoo.  |
+----[SHA256]-----+

Copy the master node's public key to a file named authorized_keys:

[root@node01 shell]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Run on the other two nodes:

[root@node02 /]# cat ~/.ssh/id_rsa.pub |ssh root@node01 'cat >> ~/.ssh/authorized_keys'
[root@node03 ~]# cat ~/.ssh/id_rsa.pub |ssh root@node01 'cat >> ~/.ssh/authorized_keys'

Run on the master node:

[root@node01 shell]# scp ~/.ssh/authorized_keys root@node02:~/.ssh/
root@node02's password: 
authorized_keys                                                                                                                          100% 1179     1.1MB/s   00:00    
[root@node01 shell]# scp ~/.ssh/authorized_keys root@node03:~/.ssh/
root@node03's password: 
authorized_keys 
[root@node01 shell]# ./ssh_all.sh "chmod 600 ~/.ssh/authorized_keys"

Once the configuration is done, try logging in between the nodes to confirm it works.
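
A quick non-interactive check (a sketch): if each command prints a hostname without prompting for a password, passwordless SSH is working.

[root@node01 shell]# for i in 1 2 3; do ssh node0$i hostname; done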

Cluster clock synchronization

Redhat 7.x installs chrony by default; remove chrony first, then install ntp.
ntp is used to keep the machines' clocks in sync: the CM host (node01) acts as the local NTP server, and the other two nodes synchronize against it.
List the installed package:

[root@node01 shell]# yum list chrony
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Installed Packages
chrony.x86_64                                                                      3.4-1.el7                                                                      @anaconda

Remove it:

[root@node01 shell]# yum remove chrony

You can check what depends on chrony:

[root@node01 shell]# yum deplist chrony

Install the ntp service on all machines:

[root@node01 shell]# ./ssh_all.sh "yum install -y ntp"
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Package ntp-4.2.6p5-29.el7.centos.x86_64 already installed and latest version
Nothing to do

It looks like ntp is already installed on CentOS 7.7.

[root@node01 shell]# vim /etc/ntp.conf

Master node:

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# When the external time servers are unreachable, serve the local clock instead
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

Other nodes:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server node01

Restart the ntp service on all machines:

[root@node01 shell]# ./ssh_all.sh "systemctl restart ntpd"
[root@node01 shell]# ./ssh_all.sh "systemctl enable ntpd"
[root@node01 shell]# ./ssh_all.sh "systemctl status ntpd"

Verify: an asterisk (*) in front of a source means synchronization succeeded.

[root@node01 shell]# ./ssh_all.sh "ntpq -p"
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.          10 l   42   64    1    0.000    0.000   0.000
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 node01          LOCAL(0)        11 u   43   64    1    0.597  -269.62   0.000
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 node01          LOCAL(0)        11 u   42   64    1    0.402  190.141   0.000

Swap configuration

Check the current swappiness value:

[root@node01 shell]# ./ssh_all.sh "cat /proc/sys/vm/swappiness"
30
30
30

Lower the swappiness:

[root@node01 shell]# ./ssh_all.sh "sysctl -a|grep vm.swappiness"
[root@node01 shell]# ./ssh_all.sh "echo 1 > /proc/sys/vm/swappiness"
[root@node01 shell]# ./ssh_all.sh "sysctl -a |grep vm.swappiness"

To make the setting permanent on all machines, set vm.swappiness to 1 in /etc/sysctl.conf (add the line if it does not exist), as shown below.

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.swappiness=1
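
The echo above already changed the running value; to confirm the persistent setting also loads cleanly (and to apply it without a reboot on a freshly edited node), the file can be reloaded everywhere, for example:

[root@node01 shell]# ./ssh_all.sh "sysctl -p"
[root@node01 shell]# ./ssh_all.sh "cat /proc/sys/vm/swappiness"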

Disable transparent huge pages

Why disable transparent huge pages? Oracle's answer:

Because transparent huge pages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly recommends disabling them.
Even in single-instance database environments, transparent huge pages can cause problems, showing up as unexpected performance issues or latency.
Oracle therefore recommends disabling transparent huge pages on all database servers running Oracle.

Transparent huge pages dynamically swap the system's default 4 KB pages for huge pages. During this process every memory allocation the operating system performs needs various memory locks, which directly hurts the application's memory access performance. The process is transparent to the application and cannot be controlled at the application level, so programs tuned for 4 KB pages can see random performance drops.
In short, disabling it avoids that performance degradation; I will dig into the details another time.

Run the following commands on all nodes to disable transparent huge pages; they take effect immediately.

[root@node01 shell]# ./ssh_all.sh "echo never > /sys/kernel/mm/transparent_hugepage/defrag"
[root@node01 shell]# ./ssh_all.sh "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
[root@node01 shell]# ./ssh_all.sh "cat /sys/kernel/mm/transparent_hugepage/enabled"
always madvise [never]
always madvise [never]
always madvise [never]
[root@node01 shell]# ./ssh_all.sh "cat /sys/kernel/mm/transparent_hugepage/defrag"
always madvise [never]
always madvise [never]
always madvise [never]

Change the permissions of /etc/rc.d/rc.local on all nodes so that it runs at boot:

[root@node01 shell]# ./ssh_all.sh "chmod u+x /etc/rc.d/rc.local"

Append the following to /etc/rc.d/rc.local on all nodes so that transparent huge pages are disabled automatically at boot.

#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
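
The rc.local header itself recommends using a systemd service instead; a minimal sketch of such a unit (a hypothetical disable-thp.service, not part of the original setup):

[root@node01 shell]# vim /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable transparent huge pages
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
[root@node01 shell]# systemctl daemon-reload
[root@node01 shell]# systemctl enable disable-thp
[root@node01 shell]# systemctl start disable-thp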

Configure the operating system repo

Fayson used an AWS environment, so this step could be skipped there; it is kept here as a reference for those deploying on physical machines.

  • Mount the operating system ISO file
[root@node01 shell]# ./ssh_all.sh "mkdir /media/DVD1"
[root@node01 shell]# ./ssh_all.sh "mount -o loop /dev/cdrom /media/DVD1"

Check:

[root@node01 shell]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G  4.7G   74G   6% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M   12K  378M   1% /run/user/42
tmpfs           378M     0  378M   0% /run/user/0
/dev/loop0      4.4G  4.4G     0 100% /media/DVD1

Create the OS repo file:

[root@node01 shell]# vim /etc/yum.repos.d/local_os.repo
[local_iso]
name=CentOS-$releasever - Media
baseurl=file:///media/DVD1
gpgcheck=0
enabled=1

Distribute it to all machines:

[root@node01 shell]# scp /etc/yum.repos.d/local_os.repo root@node02:/etc/yum.repos.d/
local_os.repo                                                                                                                            100%   96   104.9KB/s   00:00    
[root@node01 shell]# scp /etc/yum.repos.d/local_os.repo root@node03:/etc/yum.repos.d/
local_os.repo                                                                                                                            100%   96    84.6KB/s   00:00   
[root@node01 shell]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
local_iso                                                                                                                                           | 3.6 kB  00:00:00     
(1/2): local_iso/group_gz                                                                                                                           | 165 kB  00:00:00     
(2/2): local_iso/primary_db                                                                                                                         | 3.2 MB  00:00:00     
repo id                                                                          repo name                                                                           status
base/7/x86_64                                                                    CentOS-7 - Base                                                                     10,097
extras/7/x86_64                                                                  CentOS-7 - Extras                                                                      341
local_iso                                                                        CentOS-7 - Media                                                                     4,067
updates/7/x86_64                                                                 CentOS-7 - Updates                                                                   1,787
repolist: 16,292
[root@node01 shell]# 

Install the httpd service

Install httpd:

[root@node01 shell]# ./ssh_all.sh "yum -y install httpd"

Start httpd:

[root@node01 shell]# ./ssh_all.sh "systemctl start httpd"

After installing httpd, rebuild the OS repo to serve it over http so that other servers can access it as well (the target directory /var/www/html/iso must exist before the copy).

[root@node01 shell]# ./ssh_all.sh "scp -r /media/DVD1/* /var/www/html/iso/"
[root@node01 shell]# vim /etc/yum.repos.d/os.repo
[osrepo]
name=os_repo
baseurl=http://node01/iso/
enabled=true
gpgcheck=false

If you do not need every machine to act as an http source, configure this only on the master node.
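
Either way, the other nodes still need the os.repo file itself so they can install from http://node01/iso/; a sketch of pushing it out and refreshing the cache (this step is not shown above):

[root@node01 shell]# scp /etc/yum.repos.d/os.repo root@node02:/etc/yum.repos.d/
[root@node01 shell]# scp /etc/yum.repos.d/os.repo root@node03:/etc/yum.repos.d/
[root@node01 shell]# ./ssh_all.sh "yum clean all && yum makecache"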

Edit the /etc/httpd/conf/httpd.conf configuration file and make the following change inside the <IfModule mime_module> section:

AddType application/x-gzip .gz .tgz .parcel
<IfModule mime_module>
    #
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    #
    TypesConfig /etc/mime.types
    #
    # AddType allows you to add to or override the MIME configuration
    # file specified in TypesConfig for specific file types.
    #
    #AddType application/x-gzip .tgz
    #
    # AddEncoding allows you to have certain browsers uncompress
    # information on the fly. Note: Not all browsers support this.
    #
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz
    #
    # If the AddEncoding directives above are commented-out, then you
    # probably should define those extensions to indicate media types:
    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz .parcel

Save httpd.conf and restart the httpd service:

[root@node01 shell]# ./ssh_all.sh "systemctl restart httpd"

Verify by opening the URL in a browser.
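
A command-line check works just as well (a sketch; an HTTP 200 response means the repo data is being served):

[root@node01 shell]# curl -I http://node01/iso/repodata/repomd.xml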

Install MariaDB

Install MariaDB:

[root@node01 shell]# yum -y install mariadb
[root@node01 shell]# yum -y install mariadb-server

Start and configure MariaDB:

[root@node01 shell]# systemctl start mariadb
[root@node01 shell]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

Answer Y all the way through, except answer n when asked to disallow remote root login.

[root@node01 shell]# /usr/bin/mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Create the databases needed by CM, Hive, and the other services

[root@node01 shell]# mysql -u root -p
MariaDB [(none)]> create database metastore default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'hive'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON metastore. * TO 'hive'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database cm default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'cm'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cm. * TO 'cm'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database am default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'am'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON am. * TO 'am'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database rm default character set utf8;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> CREATE USER 'rm'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON rm. * TO 'rm'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database hue default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'hue'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON hue. * TO 'hue'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database oozie default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'oozie'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON oozie. * TO 'oozie'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database sentry default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'sentry'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON sentry. * TO 'sentry'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database nav_ms default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'nav_ms'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nav_ms. * TO 'nav_ms'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database nav_as default character set utf8;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> CREATE USER 'nav_as'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nav_as. * TO 'nav_as'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]>
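
The same setup can be run in a single pass instead of typing each statement interactively; a sketch using the same database names and the placeholder password '123' (replace it with a real password):

[root@node01 shell]# mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS metastore DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS cm DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON cm.* TO 'cm'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS am DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON am.* TO 'am'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS rm DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON rm.* TO 'rm'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS hue DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON hue.* TO 'hue'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS oozie DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS sentry DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON sentry.* TO 'sentry'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS nav_ms DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON nav_ms.* TO 'nav_ms'@'%' IDENTIFIED BY '123';
CREATE DATABASE IF NOT EXISTS nav_as DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON nav_as.* TO 'nav_as'@'%' IDENTIFIED BY '123';
FLUSH PRIVILEGES;
EOF

With the default sql_mode, GRANT ... IDENTIFIED BY creates the user if it does not already exist, so separate CREATE USER statements are not required here.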

Install the JDBC driver

[root@node01 soft]# mkdir -p /usr/share/java/
[root@node01 soft]# mv mysql-connector-java-5.1.38.jar /usr/share/java/
[root@node01 soft]# cd /usr/share/java
[root@node01 java]# ls
icedtea-web.jar  icedtea-web-plugin.jar  jline.jar  js.jar  mysql-connector-java-5.1.38.jar  rhino-examples.jar  rhino.jar  tagsoup.jar
[root@node01 java]# chmod 777 mysql-connector-java-5.1.38.jar 
[root@node01 java]# ln -s mysql-connector-java-5.1.38.jar mysql-connector-java.jar
[root@node01 java]# ll
total 2196
lrwxrwxrwx. 1 root  root       23 Apr  8 11:38 icedtea-web.jar -> ../icedtea-web/netx.jar
lrwxrwxrwx. 1 root  root       25 Apr  8 11:38 icedtea-web-plugin.jar -> ../icedtea-web/plugin.jar
-rw-r--r--. 1 root  root    62891 Jun 10  2014 jline.jar
-rw-r--r--. 1 root  root  1079759 Aug  2  2017 js.jar
-rwxrwxrwx. 1 grant grant  983911 Apr 13 16:32 mysql-connector-java-5.1.38.jar
lrwxrwxrwx. 1 root  root       31 Apr 13 16:37 mysql-connector-java.jar -> mysql-connector-java-5.1.38.jar
-rw-r--r--. 1 root  root    18387 Aug  2  2017 rhino-examples.jar
lrwxrwxrwx. 1 root  root        6 Apr  8 11:37 rhino.jar -> js.jar
-rw-r--r--. 1 root  root    92284 Mar  6  2015 tagsoup.jar
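
The connector only needs to be present on hosts that run database-backed services (CM Server, Hive Metastore, Oozie, Hue and so on). If any of those roles end up on node02 or node03, the jar can be copied there the same way; a sketch, not shown in the original:

[root@node01 java]# ssh root@node02 "mkdir -p /usr/share/java"
[root@node01 java]# scp mysql-connector-java-5.1.38.jar root@node02:/usr/share/java/
[root@node01 java]# ssh root@node02 "ln -sf mysql-connector-java-5.1.38.jar /usr/share/java/mysql-connector-java.jar"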

Install Cloudera Manager

Configure the local repo source

Download the CM 6.3 packages from the following URLs:

https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/allkeys.asc
[root@node01 cm6.3]# ll
total 1378004
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

Download the CDH 6.3 parcel files from the following URLs:

https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha1
https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha256
https://archive.cloudera.com/cdh6/6.3.0/parcels/manifest.json
[root@node01 cdh6.3]# ll
total 2036852
-rw-r--r--. 1 root root 2085690155 Apr 14 14:49 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r--. 1 root root         40 Apr 14 14:46 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
-rw-r--r--. 1 root root      33887 Apr 14 14:46 manifest.json
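
Note that the listing shows a .parcel.sha file while the download URL ends in .parcel.sha1; Cloudera Manager expects the checksum to sit next to the parcel as a .sha file, so the downloaded file was presumably renamed, e.g.:

[root@node01 cdh6.3]# mv CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha1 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha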

Download the 6 rpm packages and the allkeys.asc file needed for the Cloudera Manager installation into the same local directory, then run createrepo to generate the rpm metadata.

[root@node01 cm6.3]# createrepo .
Spawning worker 0 with 3 pkgs
Spawning worker 1 with 3 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@node01 cm6.3]# ll
total 1378008
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
drwxr-xr-x. 2 root root       4096 Apr 14 16:50 repodata

Configure the web server

Move the cdh6.3 and cm6.3 directories above into /var/www/html so that the rpm packages can be accessed over HTTP.

[root@node01 soft]# mv cm6.3/ cdh6.3/ /var/www/html/
[root@node01 soft]# ll  /var/www/html/*6.3
/var/www/html/cdh6.3:
total 2036852
-rw-r--r--. 1 root root 2085690155 Apr 14 14:49 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r--. 1 root root         40 Apr 14 14:46 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
-rw-r--r--. 1 root root      33887 Apr 14 14:46 manifest.json

/var/www/html/cm6.3:
total 1378008
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
drwxr-xr-x. 2 root root       4096 Apr 14 16:50 repodata

Verify that the directories can be reached from a browser.
The browser returned Forbidden. Many posts online suggest changing the directory access rules in httpd.conf (deny to allow and so on), but my configuration looked fine, so I checked the SELinux status instead:

[root@node01 shell]# getenforce
permissive

It turned out I had disabled SELinux but never rebooted the machines. After a reboot, both directories were accessible.

Create the Cloudera Manager repo source

Go into the yum.repos.d directory and run:

[root@node01 shell]# vim /etc/yum.repos.d/cm.repo
[cmrepo]
name = cm_repo
baseurl = http://node01/cm6.3
enabled = true
gpgcheck = false
[root@node01 shell]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
base                                                                                                                                                | 3.6 kB  00:00:00     
cmrepo                                                                                                                                              | 2.9 kB  00:00:00     
extras                                                                                                                                              | 2.9 kB  00:00:00     
updates                                                                                                                                             | 2.9 kB  00:00:00     
cmrepo/primary_db                                                                                                                                   | 8.6 kB  00:00:00     
repo id                                                                          repo name                                                                           status
base/7/x86_64                                                                    CentOS-7 - Base                                                                     10,097
cmrepo                                                                           cm_repo                                                                                  6
extras/7/x86_64                                                                  CentOS-7 - Extras                                                                      341
local_iso                                                                        CentOS-7 - Media                                                                     4,067
osrepo                                                                           os_repo                                                                              4,067
updates/7/x86_64                                                                 CentOS-7 - Updates                                                                   1,787
repolist: 20,365

Install the JDK

[root@node01 shell]# yum -y install oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

Install Cloudera Manager Server

Install Cloudera Manager Server via yum:

[root@node01 shell]# yum -y install cloudera-manager-server

Initialize the database:

[root@node01 shell]# /opt/cloudera/cm/schema/scm_prepare_database.sh mysql cm cm 123
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.8.0_181-cloudera/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
2020-04-16 10:31:07,354 [main] INFO  com.cloudera.enterprise.dbutil.DbCommandExecutor  - Successfully connected to database.
All done, your SCM database is configured correctly!

Start Cloudera Manager Server:

[root@node01 shell]# systemctl start cloudera-scm-server
[root@node01 shell]# systemctl status cloudera-scm-server
● cloudera-scm-server.service - Cloudera CM Server Service
   Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-16 10:32:00 CST; 32s ago
 Main PID: 3116 (java)
    Tasks: 24
   CGroup: /system.slice/cloudera-scm-server.service
           └─3116 /usr/java/jdk1.8.0_181-cloudera/bin/java -cp .:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/post...

Apr 16 10:32:00 node01 systemd[1]: Started Cloudera CM Server Service.
Apr 16 10:32:00 node01 cm-server[3116]: JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Apr 16 10:32:00 node01 cm-server[3116]: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Apr 16 10:32:03 node01 cm-server[3116]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the consol...on logging.
Apr 16 10:32:13 node01 cm-server[3116]: 10:32:13.695 [main] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Table 'cm.CM_VERSION' doesn't exist
Hint: Some lines were ellipsized, use -l to show in full.

Check that the port is listening:

[root@node01 shell]# netstat -lnpt |grep 7180
tcp        0      0 0.0.0.0:7180            0.0.0.0:*               LISTEN      3116/java    
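
If port 7180 does not come up, or to watch the startup progress, follow the Cloudera Manager Server log (the default CM Server log location):

[root@node01 shell]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log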

Access CM via http://cmip:7180/cmf/login (replace cmip with the CM server's IP).

Install CDH

Log in to CM with admin/admin.
Accept the license agreement.
I chose the free edition here.
Click "Continue" and enter a cluster name; I used "My Cluster01".
Enter the host IPs or names, click Search, and once the hosts are found click Continue.
Host discovery here feels slightly nicer than with Ambari+HDP; Ambari is very strict about hostname format, e.g. requiring the xxx.xxx.xxx form.
To reach the nodes conveniently from the host machine, I added the same hostname mappings locally by editing C:\Windows\System32\drivers\etc\hosts on the host machine.
Select "Custom Repository" and enter the http address of the cm repo (http://node01/cm6.3).
Click "More Options" and delete the other URLs, keeping only the local parcel repository set up earlier (http://node01/cdh6.3).
Continue.
Configure the SSH account and password.
Continue to the installation. Unexpectedly this step failed. Checking the logs: I had built the local_os.repo mirror earlier, but after the machines were rebooted the DVD1 mount was gone:

[root@node01 yum.repos.d]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G   24G   55G  30% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M   12K  378M   1% /run/user/42
tmpfs           378M     0  378M   0% /run/user/0

So I remounted it:

[root@node01 var]# mount -o loop /dev/cdrom /media/DVD1
[root@node01 var]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G   24G   55G  30% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M     0  378M   0% /run/user/0
tmpfs           378M   32K  378M   1% /run/user/1000
/dev/sr0        4.4G  4.4G     0 100% /run/media/grant/CentOS 7 x86_64
/dev/loop0      4.4G  4.4G     0 100% /media/DVD1

Try rebuilding the yum cache:

[root@node01 var]# yum makecache

Since all three machines had the ISO mounted, all of them need remounting; set up automatic mounting at boot in /etc/fstab.

[root@node01 yum.repos.d]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  8 11:32:44 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9a933bb8-c407-4a73-b19d-bec424ee2420 /                       ext4    defaults        1 1
UUID=5787528a-3e3d-49ba-91dd-261678552c5e /boot                   ext4    defaults        1 2
UUID=475405cb-8591-45fb-a19e-79ae02783a5e swap                    swap    defaults        0 0
/dev/cdrom /media/DVD1 iso9660 defaults,loop 0 0

After saving, activate the mounts:

[root@node01 yum.repos.d]# mount -a

In fact, if you had never created local_os.repo there would be no problem at all; it is needed for physical machines, but not really when just playing with VMs. So you can move local_os.repo into a backup directory and run yum clean all.
With that sorted out, continue the installation attempt.
If there are errors or yellow warnings, open "Show Inspector Results", resolve each item, then "Re-run" the checks until all of them pass; otherwise you cannot click Continue to the next step.
Sure enough there was a problem. The logs showed that transparent huge pages were still enabled on node03: the rc.local change had not taken effect after the reboot because its execute permission had never been set.
Make sure this is done on every machine:

[root@node03 yum.repos.d]# ll /etc/rc.d/rc.local
-rwxr-xr-x. 1 root root 713 Apr 16 15:37 /etc/rc.d/rc.local

Cluster setup wizard

You can customize the services; here I chose Data Engineering, which includes Spark.
To avoid overloading node01, node02 and node03 act as DataNodes, and ZooKeeper is installed on all three nodes.
Note: leave Activity Monitor and Telemetry Publisher without any host assigned, i.e. do not install them, since they are not needed here.
Click "Continue" to the next step, testing the database connections.
The tests succeed; click "Continue" to the directory settings. The defaults are used here; adjust the directories to your situation. I left everything at the defaults.
An error about NameNode formatting came up during the process, so on the master node I ran:

[root@node01 10-hdfs-NAMENODE-format]# rm -rf /dfs/nn

On the worker nodes:

[root@node02 rh]# rm -rf /dfs/dn

With that, the configuration is complete.
