Hands-On CDH 6.3 Installation and Configuration

Environment Requirements

This walkthrough installs CDH 6.3 on Red Hat 7. The installation steps for CDH 6 are the same as for CDH 5 and consist of four main parts:

1. Pre-installation preparation, including installing the operating system, disabling the firewall, and synchronizing server clocks;
2. Installing an external database such as MySQL;
3. Installing Cloudera Manager;
4. Installing the CDH cluster.

Please note the following installation prerequisites for CDH 6:

External database support: MySQL 5.7 or higher, MariaDB 5.5 or higher, PostgreSQL 8.4 or higher, Oracle 12c or higher

JDK: Oracle JDK 1.8 (JDK 1.7 is no longer supported)

Operating system support: RHEL 6.8 or higher, RHEL 7.2 or higher, SLES 12 SP2 or higher, Ubuntu 16 or higher

The test environment for this walkthrough:
1. CM and CDH version 6.3
2. Red Hat 7.7
3. JDK 1.8.0_181
4. MariaDB 5.5.56
5. Installed as the root user

Prerequisites

hostname and hosts configuration:

All nodes in the cluster must be able to reach each other and should use static IP addresses. Map IP addresses to hostnames in /etc/hosts and set each machine's hostname in /etc/hostname.
This article continues from the earlier CentOS 7 installation walkthrough; two additional machines were cloned from that one, for a planned three-node CDH cluster.

[root@node01 app]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 node01
192.168.1.82 node02
192.168.1.83 node03

The ssh_all.sh remote execution script

To run commands on every machine from a single host, write a simple remote-execution shell script:

[root@node01 shell]# vim ssh_all.sh
#!/bin/bash
# Run the given command on node01, node02 and node03 in turn.
for i in 1 2 3
do
    ssh "node0$i" "$@"
done

Save and exit, then make the file executable.

[root@node01 shell]# chmod u+x ssh_all.sh

Disable SELinux:

Check the SELinux status:

1. /usr/sbin/sestatus -v  ## if "SELinux status" shows enabled, SELinux is on

SELinux status: enabled

2. getenforce  ## this command works as well

Disable SELinux:

1. Temporarily (no reboot required):

setenforce 0  # put SELinux into permissive mode
setenforce 1  # put SELinux into enforcing mode

2. Permanently, by editing the configuration file (requires a reboot):

Edit /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled

Then reboot the machine.

Run on all nodes:

[root@node01 ~]# setenforce 0

Edit the configuration file /etc/selinux/config:


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
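
The same edit can be pushed to node02 and node03 with the helper script; a minimal sketch, assuming /etc/selinux/config exists on every node:

[root@node01 shell]# ./ssh_all.sh "sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config"
[root@node01 shell]# ./ssh_all.sh "grep ^SELINUX= /etc/selinux/config"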

Disable the firewall:

[root@node01 shell]# ./ssh_all.sh "systemctl stop firewalld"
root@node01's password: 
root@node02's password: 
root@node03's password: 
[root@node01 shell]# ./ssh_all.sh "systemctl disable firewalld"
root@node01's password: 
root@node02's password: 
root@node03's password: 
[root@node01 shell]# ./ssh_all.sh "systemctl status firewalld"
root@node01's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
root@node02's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
root@node03's password: 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Configure SSH between cluster nodes

First, generate a key pair on every machine:

[root@node01 shell]# ./ssh_all.sh ssh-keygen -t rsa
root@node01's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3Ruea4VORreNw15CkwqFQGtLmcrfdTzPXUAiPlvTy5M root@node01
The key's randomart image is:
+---[RSA 2048]----+
|       .o.... .  |
|         =...+   |
|        * o.o o. |
|     . + o.=.+=+ |
|      o S oo+*E=.|
|       . . o==B==|
|        . .++o ++|
|            o..  |
|           ..    |
+----[SHA256]-----+
root@node02's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:lWpt2wSPqPq7YvRdgtJs6QWG6PbRuym6nMgMpoI87DM root@node02
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|           .     |
|   . .    +      |
|  . . o  = +     |
| .   = +S + o    |
|  o + Boo..+     |
|=o o *.+ o. .    |
|OE .=.+..        |
|=+O+.+*+         |
+----[SHA256]-----+
root@node03's password: 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/rNvsvYvqpr6+cugBEoauyrDNpq2aC37aZ8XQwlMp/w root@node03
The key's randomart image is:
+---[RSA 2048]----+
|    o. .         |
|    .oo          |
|     o. .        |
|      .o         |
|...   .ES        |
|o+ .   +         |
|= . . . +        |
|+X +...* .+ o    |
|&+Bo+=*o=+=Xoo.  |
+----[SHA256]-----+

Copy the master node's public key to a file named authorized_keys:

[root@node01 shell]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Run on the other two nodes:

[root@node02 /]# cat ~/.ssh/id_rsa.pub |ssh root@node01 'cat >> ~/.ssh/authorized_keys'
[root@node03 ~]# cat ~/.ssh/id_rsa.pub |ssh root@node01 'cat >> ~/.ssh/authorized_keys'

Run on the master node:

[root@node01 shell]# scp ~/.ssh/authorized_keys root@node02:~/.ssh/
root@node02's password: 
authorized_keys                                                                                                                          100% 1179     1.1MB/s   00:00    
[root@node01 shell]# scp ~/.ssh/authorized_keys root@node03:~/.ssh/
root@node03's password: 
authorized_keys 
[root@node01 shell]# ./ssh_all.sh "chmod 600 ~/.ssh/authorized_keys"

Once this is done, try logging in to each node from the others to confirm that no password is requested.
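
A quick sanity check is to run a harmless command through the helper script; if the keys are set up correctly, no password prompt should appear for any node:

[root@node01 shell]# ./ssh_all.sh "hostname"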

Cluster clock synchronization

On Red Hat 7.x, chrony is installed by default; remove chrony first and then install ntp.
ntp keeps the machines' clocks in sync: the CM host (node01) acts as the local NTP server and the other servers synchronize against it.
List the installed chrony package:

[root@node01 shell]# yum list chrony
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Installed Packages
chrony.x86_64                                                                      3.4-1.el7                                                                      @anaconda

Remove it:

[root@node01 shell]# yum remove chrony

To inspect chrony's dependencies (optional):

[root@node01 shell]# yum deplist chrony

Install the ntp service on all machines:

[root@node01 shell]# ./ssh_all.sh "yum install -y ntp"
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Package ntp-4.2.6p5-29.el7.centos.x86_64 already installed and latest version
Nothing to do

It appears CentOS 7.7 already ships with ntp installed.

[root@node01 shell]# vim /etc/ntp.conf

On the master node:

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# fall back to the local clock when external time servers are unreachable
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

On the other nodes:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server node01

Restart the ntp service on all machines and enable it at boot:

[root@node01 shell]# ./ssh_all.sh "systemctl restart ntpd"
[root@node01 shell]# ./ssh_all.sh "systemctl enable ntpd"
[root@node01 shell]# ./ssh_all.sh "systemctl status ntpd"

Verify: an asterisk (*) at the start of a line means that peer has been selected and synchronization succeeded.

[root@node01 shell]# ./ssh_all.sh "ntpq -p"
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.          10 l   42   64    1    0.000    0.000   0.000
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 node01          LOCAL(0)        11 u   43   64    1    0.597  -269.62   0.000
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 node01          LOCAL(0)        11 u   42   64    1    0.402  190.141   0.000

Swap configuration

Check the current swappiness (how aggressively the kernel swaps):

[root@node01 shell]# ./ssh_all.sh "cat /proc/sys/vm/swappiness"
30
30
30

Lower the swappiness:

[root@node01 shell]# ./ssh_all.sh "sysctl -a|grep vm.swappiness"
[root@node01 shell]# ./ssh_all.sh "echo 1 > /proc/sys/vm/swappiness"
[root@node01 shell]# ./ssh_all.sh "sysctl -a |grep vm.swappiness"

To make the setting permanent on all machines, set vm.swappiness to 1 in /etc/sysctl.conf (add the line if it is not present).

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.swappiness=1
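
To push the same permanent setting to node02 and node03, a sketch using the helper script (the line is appended only if it is not already present, then sysctl reloads the file):

[root@node01 shell]# ./ssh_all.sh "grep -q '^vm.swappiness' /etc/sysctl.conf || echo 'vm.swappiness=1' >> /etc/sysctl.conf"
[root@node01 shell]# ./ssh_all.sh "sysctl -p"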

Disable transparent huge pages

Why disable transparent huge pages? Oracle's explanation:

Because transparent huge pages are known to cause unexpected node reboots and performance problems with RAC, Oracle strongly recommends disabling them.
Even in single-instance database environments, transparent huge pages can cause problems and lead to unexpected performance issues or delays.
Oracle therefore recommends disabling transparent huge pages on all database servers running Oracle.

Transparent huge pages dynamically promote the default 4 KB memory pages to huge pages. During this process the kernel takes various memory locks for each allocation, which directly affects the memory access performance of running programs. The mechanism is transparent to applications and cannot be controlled at the application level, so programs tuned for 4 KB pages can suffer random performance drops.
In short: disabling it avoids those performance drops; the details can be studied another time.

Run the following on all nodes to disable transparent huge pages immediately:

[root@node01 shell]# ./ssh_all.sh "echo never > /sys/kernel/mm/transparent_hugepage/defrag"
[root@node01 shell]# ./ssh_all.sh "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
[root@node01 shell]# ./ssh_all.sh "cat /sys/kernel/mm/transparent_hugepage/enabled"
always madvise [never]
always madvise [never]
always madvise [never]
[root@node01 shell]# ./ssh_all.sh "cat /sys/kernel/mm/transparent_hugepage/defrag"
always madvise [never]
always madvise [never]
always madvise [never]

Make /etc/rc.d/rc.local executable on all nodes so that it runs at boot:

[root@node01 shell]# ./ssh_all.sh "chmod u+x /etc/rc.d/rc.local"

Append the following to /etc/rc.d/rc.local on all nodes so transparent huge pages are disabled automatically at boot.

#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
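
The edited rc.local can then be copied to the other two nodes, following the same scp pattern used earlier:

[root@node01 shell]# scp /etc/rc.d/rc.local root@node02:/etc/rc.d/
[root@node01 shell]# scp /etc/rc.d/rc.local root@node03:/etc/rc.d/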

Configure the operating system repo

Fayson used an AWS environment, so this step could be skipped there; it is kept here for readers deploying on physical machines.

  • Mount the operating system ISO file
[root@node01 shell]# ./ssh_all.sh "mkdir /media/DVD1"
[root@node01 shell]# ./ssh_all.sh "mount -o loop /dev/cdrom /media/DVD1"

Check:

[root@node01 shell]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G  4.7G   74G   6% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M   12K  378M   1% /run/user/42
tmpfs           378M     0  378M   0% /run/user/0
/dev/loop0      4.4G  4.4G     0 100% /media/DVD1

Create the operating system repo file:

[root@node01 shell]# vim /etc/yum.repos.d/local_os.repo
[local_iso]
name=CentOS-$releasever - Media
baseurl=file:///media/DVD1
gpgcheck=0
enabled=1

Distribute it to all machines:

[root@node01 shell]# scp /etc/yum.repos.d/local_os.repo root@node02:/etc/yum.repos.d/
local_os.repo                                                                                                                            100%   96   104.9KB/s   00:00    
[root@node01 shell]# scp /etc/yum.repos.d/local_os.repo root@node03:/etc/yum.repos.d/
local_os.repo                                                                                                                            100%   96    84.6KB/s   00:00   
[root@node01 shell]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
local_iso                                                                                                                                           | 3.6 kB  00:00:00     
(1/2): local_iso/group_gz                                                                                                                           | 165 kB  00:00:00     
(2/2): local_iso/primary_db                                                                                                                         | 3.2 MB  00:00:00     
repo id                                                                          repo name                                                                           status
base/7/x86_64                                                                    CentOS-7 - Base                                                                     10,097
extras/7/x86_64                                                                  CentOS-7 - Extras                                                                      341
local_iso                                                                        CentOS-7 - Media                                                                     4,067
updates/7/x86_64                                                                 CentOS-7 - Updates                                                                   1,787
repolist: 16,292
[root@node01 shell]# 

Install the httpd service

Install httpd:

[root@node01 shell]# ./ssh_all.sh "yum -y install httpd"

Start httpd:

[root@node01 shell]# ./ssh_all.sh "systemctl start httpd"

With httpd installed, rebuild the operating system repo to serve it over HTTP so that other servers can reach it as well:

[root@node01 shell]# ./ssh_all.sh "mkdir -p /var/www/html/iso && cp -r /media/DVD1/* /var/www/html/iso/"
[root@node01 shell]# vim /etc/yum.repos.d/os.repo
[osrepo]
name=os_repo
baseurl=http://node01/iso/
enabled=true
gpgcheck=false

If you do not need every machine to act as an HTTP source, only configure this on the master node.
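
If the other nodes should also use this HTTP repo, the file can be distributed the same way as local_os.repo above (a sketch):

[root@node01 shell]# scp /etc/yum.repos.d/os.repo root@node02:/etc/yum.repos.d/
[root@node01 shell]# scp /etc/yum.repos.d/os.repo root@node03:/etc/yum.repos.d/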

Edit the /etc/httpd/conf/httpd.conf configuration file and, inside the <IfModule mime_module> block, add .parcel to the AddType application/x-gzip line as shown below:
<IfModule mime_module>
    #
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    #
    TypesConfig /etc/mime.types
    #
    # AddType allows you to add to or override the MIME configuration
    # file specified in TypesConfig for specific file types.
    #
    #AddType application/x-gzip .tgz
    #
    # AddEncoding allows you to have certain browsers uncompress
    # information on the fly. Note: Not all browsers support this.
    #
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz
    #
    # If the AddEncoding directives above are commented-out, then you
    # probably should define those extensions to indicate media types:
    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz .parcel

Save httpd.conf and restart the httpd service:

[root@node01 shell]# ./ssh_all.sh "systemctl restart httpd"

Verify by opening the address in a browser.
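
The same check can be done from the command line; a quick sketch, assuming curl is installed:

[root@node01 shell]# curl -s http://node01/iso/ | head -n 5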

Install MariaDB

Install MariaDB:

[root@node01 shell]# yum -y install mariadb
[root@node01 shell]# yum -y install mariadb-server

Start MariaDB, enable it at boot, and secure the installation:

[root@node01 shell]# systemctl start mariadb
[root@node01 shell]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

Answer Y to every prompt except "Disallow root login remotely?", which gets n.

[root@node01 shell]# /usr/bin/mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Create the databases needed by CM, Hive, and the other services

[root@node01 shell]# mysql -u root -p
MariaDB [(none)]> create database metastore default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'hive'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON metastore. * TO 'hive'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database cm default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'cm'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cm. * TO 'cm'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database am default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'am'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON am. * TO 'am'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database rm default character set utf8;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> CREATE USER 'rm'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON rm. * TO 'rm'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database hue default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'hue'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON hue. * TO 'hue'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database oozie default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'oozie'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON oozie. * TO 'oozie'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database sentry default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'sentry'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON sentry. * TO 'sentry'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database nav_ms default character set utf8;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE USER 'nav_ms'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nav_ms. * TO 'nav_ms'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database nav_as default character set utf8;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> CREATE USER 'nav_as'@'%' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nav_as. * TO 'nav_as'@'%';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]>
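
For reference, the repetitive statements above could also be generated by a small shell loop. This is only a hedged convenience sketch, not how it was done here; the database:user pairs and the example password '123' match the statements above, and the loop prompts for the MariaDB root password on each iteration:

[root@node01 shell]# vim create_dbs.sh
#!/bin/bash
# Create each database and a matching user, then grant the user full rights on it.
for pair in metastore:hive cm:cm am:am rm:rm hue:hue oozie:oozie sentry:sentry nav_ms:nav_ms nav_as:nav_as; do
    db=${pair%%:*}; user=${pair##*:}
    mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS ${db} DEFAULT CHARACTER SET utf8;
CREATE USER '${user}'@'%' IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'%';
FLUSH PRIVILEGES;"
done
[root@node01 shell]# bash create_dbs.sh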

Install the JDBC driver

[root@node01 soft]# mkdir -p /usr/share/java/
[root@node01 soft]# mv mysql-connector-java-5.1.38.jar /usr/share/java/
[root@node01 soft]# cd /usr/share/java
[root@node01 java]# ls
icedtea-web.jar  icedtea-web-plugin.jar  jline.jar  js.jar  mysql-connector-java-5.1.38.jar  rhino-examples.jar  rhino.jar  tagsoup.jar
[root@node01 java]# chmod 777 mysql-connector-java-5.1.38.jar 
[root@node01 java]# ln -s mysql-connector-java-5.1.38.jar mysql-connector-java.jar
[root@node01 java]# ll
total 2196
lrwxrwxrwx. 1 root  root       23 Apr  8 11:38 icedtea-web.jar -> ../icedtea-web/netx.jar
lrwxrwxrwx. 1 root  root       25 Apr  8 11:38 icedtea-web-plugin.jar -> ../icedtea-web/plugin.jar
-rw-r--r--. 1 root  root    62891 Jun 10  2014 jline.jar
-rw-r--r--. 1 root  root  1079759 Aug  2  2017 js.jar
-rwxrwxrwx. 1 grant grant  983911 Apr 13 16:32 mysql-connector-java-5.1.38.jar
lrwxrwxrwx. 1 root  root       31 Apr 13 16:37 mysql-connector-java.jar -> mysql-connector-java-5.1.38.jar
-rw-r--r--. 1 root  root    18387 Aug  2  2017 rhino-examples.jar
lrwxrwxrwx. 1 root  root        6 Apr  8 11:37 rhino.jar -> js.jar
-rw-r--r--. 1 root  root    92284 Mar  6  2015 tagsoup.jar

Cloudera Manager Installation

Configure the local repo

Download the CM 6.3 packages from:

https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
https://archive.cloudera.com/cm6/6.3.0/allkeys.asc
[root@node01 cm6.3]# ll
total 1378004
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
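
For reference, the packages can be fetched with wget into a working directory; a sketch (the directory name matches the one used in this walkthrough):

[root@node01 soft]# mkdir -p cm6.3 && cd cm6.3
[root@node01 cm6.3]# wget https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
# repeat wget for each of the remaining URLs listed above, including allkeys.asc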

Download the CDH 6.3 parcel files from:

https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha1
https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha256
https://archive.cloudera.com/cdh6/6.3.0/parcels/manifest.json
[root@node01 cdh6.3]# ll
total 2036852
-rw-r--r--. 1 root root 2085690155 Apr 14 14:49 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r--. 1 root root         40 Apr 14 14:46 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
-rw-r--r--. 1 root root      33887 Apr 14 14:46 manifest.json
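
Note that the download link ends in .parcel.sha1 while the listing above shows a .parcel.sha file: Cloudera Manager's local parcel repository expects the .sha name, so the downloaded file was presumably renamed. A sketch of the rename plus an optional integrity check (the printed hash should match the contents of the .sha file):

[root@node01 cdh6.3]# mv CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha1 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
[root@node01 cdh6.3]# sha1sum CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
[root@node01 cdh6.3]# cat CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha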

Download the 6 rpm packages and the allkeys.asc file needed for the Cloudera Manager installation into one local directory, then run createrepo there to generate the rpm metadata (install the createrepo package first if it is not already present).

[root@node01 cm6.3]# createrepo .
Spawning worker 0 with 3 pkgs
Spawning worker 1 with 3 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@node01 cm6.3]# ll
total 1378008
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
drwxr-xr-x. 2 root root       4096 Apr 14 16:50 repodata

Configure the web server
Move the cdh6.3 and cm6.3 directories above into /var/www/html so the rpm packages and parcels can be reached over HTTP.

[root@node01 soft]# mv cm6.3/ cdh6.3/ /var/www/html/
[root@node01 soft]# ll  /var/www/html/*6.3
/var/www/html/cdh6.3:
total 2036852
-rw-r--r--. 1 root root 2085690155 Apr 14 14:49 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r--. 1 root root         40 Apr 14 14:46 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
-rw-r--r--. 1 root root      33887 Apr 14 14:46 manifest.json

/var/www/html/cm6.3:
total 1378008
-rw-r--r--. 1 root root      14041 Mar 13 07:12 allkeys.asc
-rw-r--r--. 1 root root   10479136 Aug  1  2019 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root 1201341068 Apr 14 16:11 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      11464 Apr 14 14:54 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root      10996 Aug  1  2019 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root   14209884 Aug  1  2019 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r--. 1 root root  184988341 Apr 14 16:26 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
drwxr-xr-x. 2 root root       4096 Apr 14 16:50 repodata

Verify that the directories can be reached from a browser.
The browser returned Forbidden. Many posts online suggest changing the access permissions on the document root in httpd.conf (turning deny into allow and the like), but mine looked fine, so I checked the SELinux status instead:

[root@node01 shell]# getenforce
permissive

It turned out the machine had not been rebooted after SELinux was disabled in the config file. After a reboot the pages were accessible.

Create the Cloudera Manager repo
Go to the yum.repos.d directory and create the repo file:

[root@node01 shell]# vim /etc/yum.repos.d/cm.repo
[cmrepo]
name = cm_repo
baseurl = http://node01/cm6.3
enabled = true
gpgcheck = false
[root@node01 shell]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
base                                                                                                                                                | 3.6 kB  00:00:00     
cmrepo                                                                                                                                              | 2.9 kB  00:00:00     
extras                                                                                                                                              | 2.9 kB  00:00:00     
updates                                                                                                                                             | 2.9 kB  00:00:00     
cmrepo/primary_db                                                                                                                                   | 8.6 kB  00:00:00     
repo id                                                                          repo name                                                                           status
base/7/x86_64                                                                    CentOS-7 - Base                                                                     10,097
cmrepo                                                                           cm_repo                                                                                  6
extras/7/x86_64                                                                  CentOS-7 - Extras                                                                      341
local_iso                                                                        CentOS-7 - Media                                                                     4,067
osrepo                                                                           os_repo                                                                              4,067
updates/7/x86_64                                                                 CentOS-7 - Updates                                                                   1,787
repolist: 20,365

Install and verify the JDK

[root@node01 shell]# yum -y install oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
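
A quick check that the JDK landed where expected (this path matches the JAVA_HOME reported later by scm_prepare_database.sh):

[root@node01 shell]# /usr/java/jdk1.8.0_181-cloudera/bin/java -version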

Install Cloudera Manager Server

Install Cloudera Manager Server via yum:

[root@node01 shell]# yum -y install cloudera-manager-server

Initialize the database (arguments: database type, database name, database user, password):

[root@node01 shell]# /opt/cloudera/cm/schema/scm_prepare_database.sh mysql cm cm 123
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.8.0_181-cloudera/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
2020-04-16 10:31:07,354 [main] INFO  com.cloudera.enterprise.dbutil.DbCommandExecutor  - Successfully connected to database.
All done, your SCM database is configured correctly!

Start Cloudera Manager Server:

[root@node01 shell]# systemctl start cloudera-scm-server
[root@node01 shell]# systemctl status cloudera-scm-server
● cloudera-scm-server.service - Cloudera CM Server Service
   Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-16 10:32:00 CST; 32s ago
 Main PID: 3116 (java)
    Tasks: 24
   CGroup: /system.slice/cloudera-scm-server.service
           └─3116 /usr/java/jdk1.8.0_181-cloudera/bin/java -cp .:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/post...

Apr 16 10:32:00 node01 systemd[1]: Started Cloudera CM Server Service.
Apr 16 10:32:00 node01 cm-server[3116]: JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Apr 16 10:32:00 node01 cm-server[3116]: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Apr 16 10:32:03 node01 cm-server[3116]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the consol...on logging.
Apr 16 10:32:13 node01 cm-server[3116]: 10:32:13.695 [main] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Table 'cm.CM_VERSION' doesn't exist
Hint: Some lines were ellipsized, use -l to show in full.
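
The CM_VERSION error above appears to be benign on a very first start, when the schema has not yet been created. If the web UI does not come up, the server log can be followed (default log location):

[root@node01 shell]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log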

Check that port 7180 is listening:

[root@node01 shell]# netstat -lnpt |grep 7180
tcp        0      0 0.0.0.0:7180            0.0.0.0:*               LISTEN      3116/java    

Access CM at http://cmip:7180/cmf/login, replacing cmip with the CM server's IP or hostname.

Install CDH

Log in to CM with admin/admin.
Accept the license agreement.
I chose the free edition here.
Click "Continue" and enter a cluster name; I used "My Cluster01".
Enter the host IPs or names, click search to find the hosts, then click Continue.
This host-search step feels slightly nicer than Ambari+HDP: Ambari is very strict about hostname format, e.g. requiring the xxx.xxx.xxx form.
To make access from the host machine easier, add the hostname mappings locally by editing C:\Windows\System32\drivers\etc\hosts on the host machine.
Choose a custom repository and enter the HTTP address of the cm repo.
Click "More Options" and remove the other URLs.
Continue.
Configure the SSH account and password.
Continue to the installation step. Unexpectedly, this step failed. Checking the logs: I had earlier created the local_os.repo mirror, but after rebooting the machine the mounted DVD1 contents were gone:

[root@node01 yum.repos.d]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G   24G   55G  30% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M   12K  378M   1% /run/user/42
tmpfs           378M     0  378M   0% /run/user/0

So remount it:

[root@node01 var]# mount -o loop /dev/cdrom /media/DVD1
[root@node01 var]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   13M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2        83G   24G   55G  30% /
/dev/sda1       976M  149M  761M  17% /boot
tmpfs           378M     0  378M   0% /run/user/0
tmpfs           378M   32K  378M   1% /run/user/1000
/dev/sr0        4.4G  4.4G     0 100% /run/media/grant/CentOS 7 x86_64
/dev/loop0      4.4G  4.4G     0 100% /media/DVD1

Run yum makecache and try again:

[root@node01 var]# yum makecache

Any machine that had the ISO mounted needs it mounted again, and the mount should be made automatic at boot.

[root@node01 yum.repos.d]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Apr  8 11:32:44 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9a933bb8-c407-4a73-b19d-bec424ee2420 /                       ext4    defaults        1 1
UUID=5787528a-3e3d-49ba-91dd-261678552c5e /boot                   ext4    defaults        1 2
UUID=475405cb-8591-45fb-a19e-79ae02783a5e swap                    swap    defaults        0 0
/dev/cdrom /media/DVD1 iso9660 defaults,loop 0 0

After saving, run the following to activate the fstab entries:

[root@node01 yum.repos.d]# mount -a
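
To do the same on node02 and node03 (every machine that mounts the ISO needs the entry), a sketch using the helper script; the entry is appended only if it is missing:

[root@node01 shell]# ./ssh_all.sh "grep -q '/media/DVD1' /etc/fstab || echo '/dev/cdrom /media/DVD1 iso9660 defaults,loop 0 0' >> /etc/fstab"
[root@node01 shell]# ./ssh_all.sh "mkdir -p /media/DVD1 && mount -a"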

In fact, none of this would have been a problem without local_os.repo: a local OS repo is needed for physical machines, but not really for a personal VM setup. So local_os.repo can simply be moved to a backup directory, followed by yum clean all.
With that sorted out, retry the installation.
If there are errors or yellow warnings, open "Show Inspector Results", resolve each item, and re-run the inspection until every check passes; otherwise you cannot continue to the next step.
Sure enough there was a problem. The logs showed that transparent huge pages were not disabled on node03: the setting had not survived the reboot because rc.local had not been made executable there.
This must be done on every machine:

[root@node03 yum.repos.d]# ll /etc/rc.d/rc.local
-rwxr-xr-x. 1 root root 713 Apr 16 15:37 /etc/rc.d/rc.local


Cluster setup wizard

You can customize the services; I chose Data Engineering here, which includes Spark.
To keep node01 from being overloaded, node02 and node03 act as DataNodes; ZooKeeper is installed on all three nodes.
Note: leave Activity Monitor and Telemetry Publisher without any host, i.e. do not install them, since they are not needed here.
Click "Continue" to move on and test the database connections.
The tests succeed; click "Continue" to reach the directory settings. Adjust the directories to suit your environment; I left everything at the defaults.
During the first run an error was reported about formatting the NameNode (the data directories were left over from the failed attempt), so on the master node:

[root@node01 10-hdfs-NAMENODE-format]# rm -rf /dfs/nn

On the worker nodes:

[root@node02 rh]# rm -rf /dfs/dn

At this point, the configuration is complete.
