Linux in Practice, MySQL: An MHA-Based MySQL Cluster Architecture

Introduction to the MySQL MHA Architecture

MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed at Japan's DeNA by Yoshinori Matsunobu (handle youshimaton, now at Facebook) and is an excellent piece of high-availability software for failover and master promotion in MySQL replication clusters. During a MySQL failover, MHA can complete the database switchover automatically within 0–30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.

The software has two components: MHA Manager (the management node) and MHA Node (the data node).
The MHA Manager can be deployed on a dedicated machine to manage several master-slave clusters, or it can run on one of the slave nodes.
MHA Node runs on every MySQL server. The MHA Manager polls the cluster's master at regular intervals; when the master fails, it automatically promotes the slave holding the newest data to be the new master and repoints all other slaves at it. The entire failover is completely transparent to applications.
During an automatic failover, MHA tries to save the binary logs from the crashed master so that, as far as possible, no data is lost, but this is not always feasible.
For example, if the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary logs; the failover still happens, but the most recent data is lost.
MHA can be combined with semi-synchronous replication: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping every node's data consistent.
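The semi-synchronous pairing mentioned here is configured in MySQL itself, not in MHA. A minimal my.cnf sketch using the stock semisync plugins (option and plugin names below are from MySQL 5.7; MySQL 8.0.26+ renamed them to semisync_source/semisync_replica, so adjust for your version):

```ini
# Load both plugin sides everywhere, since any node may be promoted.
plugin-load-add = rpl_semi_sync_master=semisync_master.so
plugin-load-add = rpl_semi_sync_slave=semisync_slave.so
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 1000   # ms to wait for an ack before falling back to async
rpl_semi_sync_slave_enabled  = 1
```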

MHA currently supports mainly one-master, multi-slave topologies. Building an MHA cluster requires at least three database servers in one replication group, one master and two slaves: one node acts as the master, one as the candidate (standby) master, and one as a plain slave.

MHA's working principle can be summarized as follows:
(1) save the binary log events (binlog events) from the crashed master;
(2) identify the slave holding the most recent updates;
(3) apply the differential relay logs to the other slaves;
(4) promote one slave to be the new master;
(5) repoint the remaining slaves at the new master and resume replication.
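Step (2), picking the slave with the newest data, essentially compares how far each slave has read into the master's binlog. A toy sketch of that comparison (the input format and helper name are invented for illustration; real MHA inspects SHOW SLAVE STATUS and the relay logs itself):

```shell
#!/bin/sh
# Toy sketch of MHA step (2): pick the slave that has read the most binlog.
# stdin lines: "<host> <Master_Log_File> <Read_Master_Log_Pos>"
pick_latest_slave() {
    # Sort by binlog file name, then numerically by position;
    # the last line is the most advanced slave.
    sort -k2,2 -k3,3n | awk 'END { print $1 }'
}
```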

The MHA software consists of two packages: the Manager tools and the Node tools.
The Manager package mainly provides the following tools:

masterha_check_ssh #check MHA's SSH configuration
masterha_check_repl #check MySQL replication health
masterha_manager #start MHA
masterha_check_status #check the current MHA run status
masterha_master_monitor #detect whether the master is down
masterha_master_switch #control failover (automatic or manual)
masterha_conf_host #add or remove configured server entries

The Node package (these tools are normally triggered by the MHA Manager's scripts and need no manual invocation) mainly provides the following tools:

save_binary_logs #save and copy the master's binary logs
apply_diff_relay_logs #identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog #strip unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs #purge relay logs (without blocking the SQL thread)
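Because MHA needs old relay logs for differential recovery, the slaves run with automatic relay-log purging disabled, and purge_relay_logs is instead scheduled from cron on each slave. A sketch of such an entry (the file path and the run time are assumptions; the credentials match this article's setup, and the minute should be staggered per slave):

```
# hypothetical /etc/cron.d entry -- run on each slave, staggered
0 4 * * * root /usr/bin/purge_relay_logs --user=root --password='Zhao123@com' --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1
```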

MHA Deployment

Deployment environment

Role                      IP address       Hostname   server-id  Purpose
Master / MHA Manager      192.168.213.126  mha        1          writes / monitors the replication group
Slave / candidate master  192.168.213.131  mha-node1  2          reads
Slave                     192.168.213.132  mha-node2  3          reads

The master serves writes; the candidate master and the slave serve reads. If the master goes down, the candidate master is promoted to be the new master and the slave is repointed at it.

Base environment configuration

All nodes run CentOS 7.7:

[root@mysql ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

Configure the EPEL yum repository on all three nodes and install the dependencies:

[root@mha ~]# yum list |grep epel-release
epel-release.noarch      7-12     @epel
[root@mha ~]# yum install epel-release.noarch -y
[root@mha ~]# yum install perl-DBD-MySQL ncftp -y

Configure hostnames and name resolution:

[root@mha ~]# tail -n 3 /etc/hosts
192.168.213.126 mha
192.168.213.131 mha-node1
192.168.213.132 mha-node2

Configure time synchronization:

[root@mha ~]# ntpdate cn.pool.ntp.org
[root@mha ~]# hwclock --systohc

Disable the firewall and SELinux:

[root@mha ~]# systemctl stop firewalld
[root@mha ~]# systemctl disable firewalld
[root@mha ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@mha ~]# setenforce 0

Configure master-slave replication across the three nodes

(1) On the master mha, wipe the database and re-initialize

[root@mha ~]# systemctl stop mysqld
[root@mha ~]# systemctl disable mysqld.service
[root@mha ~]# cd /var/lib/mysql
[root@mha mysql]# rm -rf *

Edit the configuration file:

[root@mha mysql]# cat /etc/my.cnf
[mysqld]
server-id=1
gtid_mode=ON
enforce-gtid-consistency=true
master_info_repository=TABLE
relay_log_info_repository=TABLE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

default-authentication-plugin=mysql_native_password

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

The three nodes' configuration files differ only in server-id. Copy the file to the other two nodes and change just server-id:

[root@mha ~]# scp /etc/my.cnf mha-node1:/etc/
[root@mha ~]# scp /etc/my.cnf mha-node2:/etc/
[root@mha-node1 ~]# vim /etc/my.cnf
server-id=2
[root@mha-node2 ~]# vim /etc/my.cnf
server-id=3
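The copy-then-edit step can be scripted so only server-id changes per node. A small sketch (the helper name is made up):

```shell
#!/bin/sh
# Sketch: rewrite only the server-id line of a copied my.cnf in place.
# Usage: set_server_id <path to my.cnf> <id>
set_server_id() {
    cnf=$1 id=$2
    sed -i "s/^server-id=.*/server-id=${id}/" "$cnf"
}
```

e.g. `scp /etc/my.cnf mha-node2:/etc/ && ssh mha-node2 "$(typeset -f set_server_id); set_server_id /etc/my.cnf 3"`.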

Configure the master node mha:

[root@mha ~]# systemctl start mysqld
[root@mha ~]# grep password /var/log/mysqld.log	#grep out the temporary password

Run the secure installation:

[root@mha ~]# mysql_secure_installation
Enter password for user root: (the temporary password grepped above)
New password:
Re-enter new password:
Change the password for root ? ((Press y|Y for Yes, any other key for No) : N

Answer Y to all remaining prompts.
All done!
[root@mha ~]# mysql -p
mysql> show master status\G
*************************** 1. row ***************************
             File: binlog.000002
         Position: 1084
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: ed0154be-5b91-11ea-84a3-0050563abf3f:1-4

mysql> create user 'copy'@'%' identified with mysql_native_password by 'Cloudbu@123';
mysql> grant replication slave on *.* to 'copy'@'%';
mysql> flush privileges;

(2) Configure the slave mha-node1: reset the database

[root@mha-node1 ~]# systemctl start mysqld
[root@mha-node1 ~]# grep password /var/log/mysqld.log
[root@mha-node1 ~]# mysql_secure_installation
[root@mha-node1 ~]# mysql -p
mysql> CHANGE MASTER TO
MASTER_HOST='192.168.213.126',
MASTER_USER='copy',
MASTER_PASSWORD='Cloudbu@123',
MASTER_AUTO_POSITION=1;

mysql> start slave;
mysql> show slave status\G
	Slave_IO_Running: Yes
	Slave_SQL_Running: Yes

(3) Configure the slave mha-node2: reset the database

Identical to the configuration of mha-node1.
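The Slave_IO_Running/Slave_SQL_Running check can be scripted instead of eyeballed; a sketch that parses the \G output (the helper name is invented):

```shell
#!/bin/sh
# Sketch: exit 0 only if both replication threads report Yes.
# Reads the output of SHOW SLAVE STATUS\G on stdin.
slave_healthy() {
    awk '/Slave_IO_Running:/  { io  = $2 }
         /Slave_SQL_Running:/ { sql = $2 }
         END { exit !(io == "Yes" && sql == "Yes") }'
}
```

Usage on a slave: `mysql -p -e 'show slave status\G' | slave_healthy && echo replication OK`.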
(4) Test: create a database on the master mha

[root@mha ~]# mysql -p
mysql> create database anliu;

Check on both slaves: master-slave replication is working:

[root@mha-node1 ~]# mysql -p
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| anliu              |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
[root@mha-node2 ~]# mysql -p
mysql> show databases;

Configure passwordless SSH trust

Run the following on each of the three MySQL nodes (every node pushes its key to all three, including itself):

[root@mha ~]# ssh-keygen -t rsa
[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
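The three ssh-copy-id commands can be wrapped in a loop run on each node; a sketch (push_keys is a made-up helper, and the copier command is injectable so the loop can be dry-run):

```shell
#!/bin/sh
# Sketch: push the local public key to every node in the cluster, self included.
NODES="192.168.213.126 192.168.213.131 192.168.213.132"
push_keys() {
    # $1: command used to copy a key; defaults to ssh-copy-id,
    # pass e.g. "echo" for a dry run.
    copier=${1:-ssh-copy-id}
    for h in $NODES; do
        $copier -i /root/.ssh/id_rsa.pub "root@$h" || return 1
    done
}
```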

Installing MHA

(1) On all three nodes

[root@mha ~]# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@mha ~]# rpm -ivh epel-release-latest-7.noarch.rpm
[root@mha ~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
[root@mha ~]# wget https://qiniu.wsfnk.com/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
[root@mha ~]# rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm

(2) On the manager node

[root@mha ~]# wget https://qiniu.wsfnk.com/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
[root@mha ~]# rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

(3) On the manager node (server01), create MHA's working directory and its configuration file

[root@mha ~]# mkdir /etc/masterha
[root@mha ~]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/etc/masterha/
manager_log=/etc/masterha/app1.log
master_binlog_dir=/var/lib/mysql
#master_ip_failover_script=/usr/local/bin/master_ip_failover
#master_ip_online_change_script=/usr/local/bin/master_ip_online_change
user=root
password=Zhao123@com
ping_interval=1
remote_workdir=/tmp
repl_user=copy
repl_password=Cloudbu@123
#report_script=/usr/local/send_report
#secondary_check_script=/usr/local/bin/masterha_secondary_check -s server03 -s server02
#shutdown_script=""
ssh_user=root
[server01]
hostname=192.168.213.126
port=3306
[server02]
hostname=192.168.213.131
port=3306
candidate_master=1
check_repl_delay=0
[server03]
hostname=192.168.213.132
port=3306
#no_master=1

(4) Check SSH connectivity from the MHA Manager to every MHA Node:

[root@mha ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

(5) Grant privileges to root (granting on mha is enough; note in particular that MySQL 8's grant rules differ from older versions)

[root@mha ~]# mysql -p
mysql> create user 'root'@'%' identified by 'Zhao123@com';
mysql> grant all on *.* to 'root'@'%' with grant option;
mysql> flush privileges;

(6) Inspect the whole cluster's state with the masterha_check_repl script:

[root@mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf


Testing

Manual switchover

(1) Switch the master role from mha to mha-node1

[root@mha ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.213.131 --new_master_port=3306 --orig_master_is_new_slave
#answer yes to every prompt
Mon Mar  2 17:01:24 2020 - [info] Switching master to 192.168.213.131(192.168.213.131:3306) completed successfully.

(2) On mha-node1, confirm it has become the master:

[root@mha-node1 ~]# mysql -p
mysql> show slave status\G
Empty set (0.00 sec)
mysql> show master status;
+---------------+----------+--------------+------------------+-------------------------------------------------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                                                                   |
+---------------+----------+--------------+------------------+-------------------------------------------------------------------------------------+
| binlog.000002 |     3343 |              |                  | c38d86b0-5b95-11ea-b431-00505632d828:1-3,
ed0154be-5b91-11ea-84a3-0050563abf3f:1-11 |
+---------------+----------+--------------+------------------+-------------------------------------------------------------------------------------+

(3) On mha and mha-node2, the slave status now shows mha-node1 as the master:

[root@mha-node2 ~]# mysql -p
[root@mha ~]# mysql -p
mysql> show slave status\G

(4) Create a table on the current master mha-node1:

mysql> create table anliu.linux(
username varchar(10) not null,
password varchar(10) not null);

mysql> show tables in anliu;
+-----------------+
| Tables_in_anliu |
+-----------------+
| linux           |
+-----------------+

(5) Check the table on mha and mha-node2: the data has replicated:

[root@mha ~]# mysql -p
[root@mha-node2 ~]# mysql -p
mysql> show tables in anliu;
+-----------------+
| Tables_in_anliu |
+-----------------+
| linux           |
+-----------------+

Automatic failover

(1) Start the manager process

[root@mha masterha]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_last_failover &
[1] 5294
[root@mha masterha]# nohup: ignoring input and appending output to ‘nohup.out’

(2) Simulate a failure

[root@mha-node1 ~]# systemctl stop mysqld

(3) On mha-node2, check: mha is now the master.
(4) Check on mha:

[root@mha ~]# mysql -p
mysql> show slave status\G
Empty set (0.00 sec)

Configuring VIP failover with MHA's bundled script

A VIP makes MySQL highly available to clients: they only ever connect to the configured virtual IP.

(1) Edit the configuration file

[root@mha ~]# cat /etc/masterha/app1.cnf
[server default]
manager_workdir=/etc/masterha/
manager_log=/etc/masterha/app1.log
master_binlog_dir=/var/lib/mysql
master_ip_failover_script=/usr/local/bin/master_ip_failover	#add the script
master_ip_online_change_script=/usr/local/bin/master_ip_failover
user=root
password=Zhao123@com
ping_interval=1
remote_workdir=/tmp
repl_user=copy
repl_password=Cloudbu@123
#report_script=/usr/local/send_report
#secondary_check_script=/usr/local/bin/masterha_secondary_check -s server03 -s server02
#shutdown_script=""
ssh_user=root
[server01]
hostname=192.168.213.126
port=3306
candidate_master=1

[server02]
hostname=192.168.213.131
port=3306
candidate_master=1
check_repl_delay=0

[server03]
hostname=192.168.213.132
port=3306
no_master=1

(2) Write the script and make it executable

[root@mha ~]# chmod +x /usr/local/bin/master_ip_failover
[root@mha ~]# cat /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);
my $vip = '192.168.213.222/24';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
    return 0  unless  ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

(3) Add the VIP on the current master

[root@mha ~]# ifconfig ens33:0 192.168.213.222

(4) Run the replication check; once it passes, start the MHA monitor

[root@mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
MySQL Replication Health is OK.
[root@mha ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log 2>&1 &
[root@mha ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:1908) is running(0:PING_OK), master:192.168.213.126

(5) VIP failover test
The virtual IP lives on the mha master. Stop the MySQL service there and watch the VIP move and the roles switch.

[root@mha ~]# systemctl stop mysqld
[root@mha ~]# ifconfig	#ens33:0 is gone

[root@mha-node1 ~]# ifconfig ens33:0	#the VIP has moved to the standby master
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.213.222  netmask 255.255.255.0  broadcast 192.168.213.255
        ether 00:50:56:32:d8:28  txqueuelen 1000  (Ethernet)

Check on the standby master:

[root@mha-node1 ~]# mysql -p -e "show processlist"
Enter password:
+----+-----------------+-----------------+------+------------------+------+---------------------------------------------------------------+------------------+
| Id | User            | Host            | db   | Command          | Time | State                                                         | Info             |
+----+-----------------+-----------------+------+------------------+------+---------------------------------------------------------------+------------------+
|  4 | event_scheduler | localhost       | NULL | Daemon           | 8089 | Waiting on empty queue                                        | NULL             |
| 56 | copy            | mha-node2:55444 | NULL | Binlog Dump GTID | 1288 | Master has sent all binlog to slave; waiting for more updates | NULL             |
| 63 | root            | localhost       | NULL | Query            |    0 | starting                                                      | show processlist |
+----+-----------------+-----------------+------+------------------+------+---------------------------------------------------------------+------------------+

The virtual IP has moved and the master-slave roles have switched.
(6) Recovery afterwards
After repairing mha, configure it as a slave of the new master mha-node1:

[root@mha ~]# rm -rf /etc/masterha/app1.failover.complete
[root@mha ~]# systemctl start mysqld
[root@mha ~]# mysql -p
mysql> CHANGE MASTER TO
MASTER_HOST='192.168.213.131',
MASTER_USER='copy',
MASTER_PASSWORD='Cloudbu@123',
MASTER_AUTO_POSITION=1;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.213.131
                  Master_User: copy
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000004
          Read_Master_Log_Pos: 235
               Relay_Log_File: mha-relay-bin.000002
                Relay_Log_Pos: 363
        Relay_Master_Log_File: binlog.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

MHA commands

1. Check that passwordless SSH works
masterha_check_ssh --conf=/etc/masterha/app1.cnf
2. Check that replication is set up correctly
masterha_check_repl --conf=/etc/masterha/app1.cnf
3. Start MHA
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log 2>&1 &
4. Check the running status
masterha_check_status --conf=/etc/masterha/app1.cnf
5. Stop MHA
masterha_stop --conf=/etc/masterha/app1.cnf
6. Restart after a failover
rm -rf /etc/masterha/app1.failover.complete
Each failover leaves an app1.failover.complete file in the manager working directory; while it exists the next switchover will refuse to run, so it must be cleaned up by hand.
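The cleanup in item 6 can be wrapped in a tiny helper so a forgotten marker file never blocks the next failover (the function name is illustrative; the marker is named after the app section, app1 here):

```shell
#!/bin/sh
# Sketch: remove the <app>.failover.complete marker before restarting the manager.
# Usage: reset_failover_marker <manager_workdir> <app name>
reset_failover_marker() {
    workdir=$1 app=$2
    marker="$workdir/$app.failover.complete"
    [ -f "$marker" ] && rm -f "$marker"
    return 0
}
```

e.g. `reset_failover_marker /etc/masterha app1` before the nohup masterha_manager line.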

Configuring VIP failover with keepalived

(1) Install keepalived
Install keepalived on the current master and on the candidate master:

yum install -y keepalived

(2) Edit the keepalived configuration on the master and the candidate master

[root@mha ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id MYSQL_HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.213.222
    }
}

With keepalived configured, start it on both nodes and first verify that keepalived alone moves the VIP:

[root@mha ~]# service keepalived start
[root@mha ~]# ip a
[root@mha-node1 ~]# service keepalived start
[root@mha ~]# service keepalived stop
[root@mha-node1 ~]# ip a	#the VIP has moved

(3) Set up the failover script and point the MHA configuration at it

[root@mha ~]# vim /etc/masterha/app1.cnf
#failover script location
master_ip_failover_script=/usr/local/bin/master_ip_failover --ssh_user=root
#online-change script location
master_ip_online_change_script=/usr/local/bin/master_ip_online_change --ssh_user=root

Edit the failover script:

[root@mha ~]# chmod +x /usr/local/bin/master_ip_failover
[root@mha ~]# cat /usr/local/bin/master_ip_failover
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $ssh_start_vip = "service keepalived start";
#my $ssh_start_vip = "systemctl start keepalived.service";
#my $ssh_stop_vip = "systemctl stop keepalived.service";
my $ssh_stop_vip = "service keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        my $exit_code = 10;
        eval {
            print "Enabling the VIP on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enable the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
    return 0  unless  ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Check replication status:

[root@mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
MySQL Replication Health is OK.

(4) Start the MHA monitor
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log 2>&1 &
Check the running status
masterha_check_status --conf=/etc/masterha/app1.cnf

Problems

1. Testing on this setup failed: systemctl stop mysqld did not trigger MHA's failover script, and the keepalived VIP did not move. A setup built per "keepalived-based MySQL dual-master architecture" (check_mysql.sh) did switch correctly.
The likely cause is a logic error in the master_ip_failover script; at check time masterha_check_repl still reported MySQL Replication Health is OK.

2. keepalived by itself only moves the VIP when the whole machine goes down. Since our goal is failover when the MySQL instance dies, keepalived needs a helper script to drive the VIP movement more flexibly.
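A minimal sketch of such a helper in the spirit of check_mysql.sh: probe the local MySQL and stop keepalived on failure so the VIP floats to the standby. The probe and the action are injectable here so the logic can be tested; the real script would typically run in a loop, from cron, or under keepalived's vrrp_script:

```shell
#!/bin/sh
# Sketch of a check_mysql.sh-style helper (names and defaults are assumptions).
check_mysql() {
    # $1: probe command, default a mysqladmin ping with this article's password
    # $2: action on failure, default stopping keepalived to release the VIP
    probe=${1:-"mysqladmin -uroot -p'Zhao123@com' ping"}
    on_fail=${2:-"systemctl stop keepalived"}
    if ! eval "$probe" >/dev/null 2>&1; then
        eval "$on_fail"
    fi
}
```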

On MySQL high-availability architectures:

1. MHA's bundled script alone can move the VIP, giving a one-master, multi-slave MySQL architecture.
2. keepalived alone can move the VIP, giving a dual-master or a one-master, multi-slave architecture.
3. So when should you use MHA, and when keepalived?

MHA actually addresses data consistency: its main concern is minimizing slave data loss when the master dies. keepalived merely makes the VIP itself highly available.
