http://ywliyq.blog.51cto.com/11433965/1856974
These are reading notes on section 8.4 of the book "循序漸進Linux (Step-by-Step Linux), 2nd Edition" by 南非螞蟻.
MMM cluster suite (MySQL Master-Master replication Manager)
The core functionality of the MMM suite is implemented by the following three scripts:
1) mmm_mond
A monitoring daemon that runs on the management node. It monitors all database nodes and decides on and carries out all role switches.
2) mmm_agentd
An agent daemon that runs on every MySQL server. It answers the monitor's test probes and performs simple remote service changes.
3) mmm_control
A simple management script used to view and manage the cluster's running state, and also to manage the mmm_mond process.
MMM is not well suited to environments that demand high data safety together with heavy read and write traffic.
================================================
8.4.2 Typical MMM deployment schemes
The dual-Master application architecture
By adding several Slave nodes on top of the two Master nodes, you get a dual-master, multi-slave architecture.
A dual-master, multi-slave MySQL architecture suits workloads with a very high volume of read queries; with the reader and writer IPs that MMM provides, MySQL read/write splitting is straightforward to implement.
================================================
8.4.3 Architecture diagram of the MMM high-availability MySQL scheme
A dual-master, dual-slave MySQL high-availability cluster
The server configuration is shown in the table:
The reader/writer (read/write splitting) IP list for the dual-master, dual-slave architecture:
8.4.4 Installing and configuring MMM
Install the MySQL database with yum on all four master/slave nodes and set a password
# yum -y install mysql mysql-server
Start MySQL
# /etc/init.d/mysqld start
Set the MySQL root password (jzh0024 in this setup):
# mysql_secure_installation
The default password is empty; answer y to every prompt.
With that, the MySQL installation is complete.
-----------------------------
1. Edit the MySQL configuration file and set the read_only parameter on every MySQL host.
In /etc/my.cnf, add to the [mysqld] section:
server-id = 1
log-bin=mysql-bin
relay-log = mysql-relay-bin
replicate-wild-ignore-table=mysql.%
replicate-wild-ignore-table=test.%
replicate-wild-ignore-table=information_schema.%
read_only=1
where:
Master1 uses server-id = 1
Master2 uses server-id = 2
Slave1 uses server-id = 3
Slave2 uses server-id = 4
Restart the MySQL database
# /etc/init.d/mysqld restart
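The only per-host difference in the [mysqld] fragment above is the server-id value, which must be unique on every node. A small sketch (a hypothetical helper, not from the book) that renders the fragment for each host:

```python
# Render the [mysqld] replication fragment from the text for each node.
# The host-name -> server-id mapping follows the list above.

FRAGMENT = """server-id = {server_id}
log-bin=mysql-bin
relay-log = mysql-relay-bin
replicate-wild-ignore-table=mysql.%
replicate-wild-ignore-table=test.%
replicate-wild-ignore-table=information_schema.%
read_only=1
"""

SERVER_IDS = {"Master1": 1, "Master2": 2, "Slave1": 3, "Slave2": 4}

def mysqld_fragment(host: str) -> str:
    """Return the my.cnf [mysqld] lines for one node; server-id must be unique."""
    return FRAGMENT.format(server_id=SERVER_IDS[host])

if __name__ == "__main__":
    for host in SERVER_IDS:
        print(f"# {host}\n{mysqld_fragment(host)}")
```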
-----------------------------
1. Installing the MMM suite
Install the MMM suite with yum; first add the EPEL yum repository on all nodes.
# cd /server/tools/
Upload the epel-release-6-8.noarch.rpm package, then install it:
# rpm -Uvh epel-release-6-8.noarch.rpm
On the monitor node, run:
[root@Monitor tools]# yum -y install mysql-mmm*
The four MySQL DB nodes only need mysql-mmm-agent; on each of them run:
[root@Master1 tools]# yum -y install mysql-mmm-agent
After installation, check the installed MMM version:
[root@Monitor tools]# rpm -qa|grep mysql-mmm
mysql-mmm-2.2.1-2.el6.noarch
mysql-mmm-tools-2.2.1-2.el6.noarch
mysql-mmm-monitor-2.2.1-2.el6.noarch
mysql-mmm-agent-2.2.1-2.el6.noarch
With that, installation of the MMM cluster suite is complete.
-------------------------------------
2. Configuring the MMM cluster suite
Before configuring the MMM suite itself, first set up master-master replication between Master1 and Master2,
and master-slave replication from Master1 to Slave1 and Slave2.
Step 1: configure master-master replication between Master1 and Master2
On Master1, first create a database and a table for replication testing
mysql> create database master001;
mysql> use master001;
Create a table
mysql> create table permaster001(member_no char(9) not null, name char(5), birthday date, exam_score tinyint, primary key(member_no));
View the table structure
mysql> desc permaster001;
Lock the tables on Master1 and back up the database
mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)
Do not exit this session, or the table lock is released; open a new terminal to back up the data (or use mysqldump instead):
# cd /var/lib/
# tar zcvf mysqlmaster1.tar.gz mysql
# scp -P50024 mysqlmaster1.tar.gz [email protected]:/var/lib/
[email protected]'s password:
mysqlmaster1.tar.gz 100% 214KB 213.9KB/s 00:00
Note: this requires root SSH login to be enabled on Master2. On Master2, edit:
# vim /etc/ssh/sshd_config
PermitRootLogin yes
and restart sshd there:
[root@Master2 ~]# /etc/init.d/sshd restart
Once the data has been copied to Master2, restart the databases on Master1 and Master2 in turn
[root@Master1 lib]# /etc/init.d/mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
[root@Master2 tools]# /etc/init.d/mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
3. Create the replication user and grant privileges
On Master1, create the replication user:
mysql> grant replication slave on *.* to 'repl_user'@'10.24.24.21' identified by 'repl_password';
Query OK, 0 rows affected (0.00 sec)
Reload the grant tables
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 345 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Then, on Master2, restore the backup and set Master1 as its master:
# cd /var/lib/
# tar xf mysqlmaster1.tar.gz
mysql> change master to
    -> master_host='10.24.24.20',
    -> master_user='repl_user',
    -> master_password='repl_password',
    -> master_log_file='mysql-bin.000002',
    -> master_log_pos=345;
Note the master_log_file and master_log_pos options: these are the values just obtained from SHOW MASTER STATUS on Master1.
Start the slave threads on Master2 and check the slave status there:
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.24.24.20
Master_User: repl_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 345
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
At this point, MySQL replication from Master1 to Master2 is working.
To verify data integrity, create a database or table yourself and check that it is replicated.
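The CHANGE MASTER TO coordinates above come straight from the SHOW MASTER STATUS output. A small sketch (a hypothetical helper, not from the book) that renders the statement from those values, so the file/position pair cannot be mistyped:

```python
# Build a CHANGE MASTER TO statement from the (File, Position) pair that
# SHOW MASTER STATUS returned on the master.

def change_master_sql(host: str, user: str, password: str,
                      log_file: str, log_pos: int) -> str:
    """Render the CHANGE MASTER TO statement for one slave."""
    return (
        "CHANGE MASTER TO "
        f"master_host='{host}', "
        f"master_user='{user}', "
        f"master_password='{password}', "
        f"master_log_file='{log_file}', "
        f"master_log_pos={log_pos};"
    )

# Values from the SHOW MASTER STATUS output above (mysql-bin.000002, position 345):
stmt = change_master_sql("10.24.24.20", "repl_user", "repl_password",
                         "mysql-bin.000002", 345)
print(stmt)
```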
---------------------------------------------
Step 2: configure replication from Master2 to Master1
Create the replication user in the Master2 database
mysql> grant replication slave on *.* to 'repl_user1'@'10.24.24.20' identified by 'repl_password1';
Reload the grant tables
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 347 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
On Master1, set Master2 as its master:
mysql> change master to
    -> master_host='10.24.24.21',
    -> master_user='repl_user1',
    -> master_password='repl_password1',
    -> master_log_file='mysql-bin.000002',
    -> master_log_pos=347;
Start the slave threads on Master1
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
Check the slave status on Master1
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.24.24.21
Master_User: repl_user1
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 347
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,test.%,information_schema.%
Slave_IO_Running and Slave_SQL_Running are both Yes, which shows that replication on Master1 is running normally; the dual-master MySQL replication setup is now complete.
To verify data integrity, create a database or table yourself and check that it is replicated.
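The health check described above (both replication threads reporting Yes) can be automated. A minimal sketch, assuming the text of `SHOW SLAVE STATUS\G` is already captured as a string (this parser is not part of MMM):

```python
# Parse "Key: Value" lines from SHOW SLAVE STATUS\G output and confirm both
# replication threads report Yes -- the health condition the text describes.

def parse_slave_status(text: str) -> dict:
    """Turn 'Key: Value' lines from \\G output into a dict of strings."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def replication_healthy(status: dict) -> bool:
    """Replication is healthy only when both threads are running."""
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

SAMPLE = """\
Slave_IO_State: Waiting for master to send event
Master_Host: 10.24.24.21
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
"""

print(replication_healthy(parse_slave_status(SAMPLE)))  # True when both are Yes
```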
-------------------------------------------------------
Step 3: configure master-slave replication from Master1 to Slave1 and Slave2
Note that when Slave1 and Slave2 replicate from the master, Master_Host must be set to the Master node's physical IP address, not the virtual IP.
On Master1, first create a database and a table for replication testing
mysql> create database slave012;
mysql> use slave012;
Create a table
mysql> create table persalve012(member_no char(9) not null, name char(5), birthday date, exam_score tinyint, primary key(member_no));
View the table structure
mysql> desc persalve012;
Lock the tables on Master1 and back up the database
mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)
Do not exit this session, or the table lock is released; open a new terminal to back up the data (or use mysqldump instead):
# cd /var/lib/
# tar zcvf mysqlslave12.tar.gz mysql
Send it to Slave1:
[root@Master1 lib]# scp -P50024 mysqlslave12.tar.gz [email protected]:/var/lib/
The authenticity of host '[10.24.24.22]:50024 ([10.24.24.22]:50024)' can't be established.
RSA key fingerprint is 26:b4:7d:98:3e:ab:19:ba:08:c9:46:9b:fb:12:5d:72.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[10.24.24.22]:50024' (RSA) to the list of known hosts.
[email protected]'s password:
mysqlslave12.tar.gz 100% 215KB 215.2KB/s 00:00
Send it to Slave2:
[root@Master1 lib]# scp -P50024 mysqlslave12.tar.gz [email protected]:/var/lib/
The authenticity of host '[10.24.24.23]:50024 ([10.24.24.23]:50024)' can't be established.
RSA key fingerprint is 26:b4:7d:98:3e:ab:19:ba:08:c9:46:9b:fb:12:5d:72.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[10.24.24.23]:50024' (RSA) to the list of known hosts.
[email protected]'s password:
mysqlslave12.tar.gz 100% 215KB 215.2KB/s 00:00
Note: this requires root SSH login to be enabled on Slave1 and Slave2. On each of them, edit:
# vim /etc/ssh/sshd_config
PermitRootLogin yes
and restart sshd there:
# /etc/init.d/sshd restart
Once the data has been copied to Slave1 and Slave2, restart the databases on Master1, Slave1, and Slave2 in turn
[root@Master1 lib]# /etc/init.d/mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
[root@Slave1 tools]# /etc/init.d/mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
[root@Slave2 tools]# /etc/init.d/mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
3. Create the replication users and grant privileges
On Master1, create replication users for Slave1 and Slave2:
mysql> grant replication slave on *.* to 'repl_user'@'10.24.24.22' identified by 'repl_password';
mysql> grant replication slave on *.* to 'repl_user'@'10.24.24.23' identified by 'repl_password';
Reload the grant tables
mysql> flush privileges;
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 1296 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Then, on Slave1 and Slave2, restore the backup and set Master1 as their master:
# cd /var/lib/
# tar xf mysqlslave12.tar.gz
mysql> change master to
    -> master_host='10.24.24.20',
    -> master_user='repl_user',
    -> master_password='repl_password',
    -> master_log_file='mysql-bin.000002',
    -> master_log_pos=1296;
Note the master_log_file and master_log_pos options: these are the values just obtained from SHOW MASTER STATUS on Master1.
Start the slave threads on Slave1 and Slave2 and check the slave status on each:
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.24.24.20
Master_User: repl_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 1296
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,test.%,information_schema.%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1296
Relay_Log_Space: 406
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)
At this point, MySQL replication from Master1 to Slave1 and Slave2 is working.
To verify data integrity, create a database or table yourself and check that it is replicated.
After all of the settings above are in place, restart the MySQL service on every node.
================================================
To configure MMM, two more accounts must first be created on all MySQL nodes, in addition to the replication account:
the monitor account and the monitor agent account.
The monitor account is used by the MMM management server to health-check all MySQL servers;
the monitor agent account is used to switch read-only mode and replication settings.
Create the accounts and grant privileges on all MySQL nodes:
mysql> grant replication client on *.* to 'mmm_monitor'@'10.24.24.%' identified by 'monitor_password';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'10.24.24.%' identified by 'agent_password';
Reload the grant tables
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
After installing MMM with yum, the default configuration directory is /etc/mysql-mmm.
The main configuration files are:
mmm_mon.conf — configured only on the MMM management node; holds the monitoring parameters;
mmm_common.conf — must be configured on every MMM cluster node, with identical content everywhere; defines the reader/writer node IPs and the virtual IPs;
mmm_agent.conf — also configured on every MySQL node; sets each node's identity.
(1) Configure /etc/mysql-mmm/mmm_common.conf on the MMM management node (Monitor) and on all MySQL nodes
# cp /etc/mysql-mmm/mmm_common.conf /etc/mysql-mmm/mmm_common.conf.bak
# vim /etc/mysql-mmm/mmm_common.conf
<<-------------------------------- config -------------------------------->>
active_master_role writer
<host default>
cluster_interface eth0
pid_path /var/run/mysql-mmm/mmm_agentd.pid
bin_path /usr/libexec/mysql-mmm/
replication_user repl_user
replication_password repl_password
agent_user mmm_agent
agent_password agent_password
</host>
<host db1>
ip 10.24.24.20
mode master
peer db2
</host>
<host db2>
ip 10.24.24.21
mode master
peer db1
</host>
<host db3>
ip 10.24.24.22
mode slave
</host>
<host db4>
ip 10.24.24.23
mode slave
</host>
<role writer>
hosts db1, db2
ips 10.24.24.30
mode exclusive
</role>
<role reader>
hosts db1, db2, db3, db4
ips 10.24.24.31, 10.24.24.32, 10.24.24.33, 10.24.24.34
mode balanced
</role>
<<-------------------------------- config -------------------------------->>
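The layout intended by the file above can be sanity-checked mechanically: one exclusive writer VIP shared by the two masters, and four balanced reader VIPs across all nodes. A minimal sketch of a parser for the `<host>`/`<role>` sections (a hypothetical helper, not part of MMM):

```python
# Parse <kind name> ... </kind> sections of an mmm_common.conf-style file
# into {(kind, name): {key: value}} and check the role layout from the text.

import re

def parse_sections(text: str) -> dict:
    """Return {(kind, name): {key: value}} for <kind name>...</kind> blocks."""
    sections = {}
    for kind, name, body in re.findall(r"<(\w+)\s+(\w+)>(.*?)</\1>", text, re.S):
        entries = {}
        for line in body.strip().splitlines():
            key, _, value = line.strip().partition(" ")
            entries[key] = value.strip()
        sections[(kind, name)] = entries
    return sections

# A subset of the configuration shown above:
CONF = """
<host db1>
    ip      10.24.24.20
    mode    master
    peer    db2
</host>
<role writer>
    hosts   db1, db2
    ips     10.24.24.30
    mode    exclusive
</role>
<role reader>
    hosts   db1, db2, db3, db4
    ips     10.24.24.31, 10.24.24.32, 10.24.24.33, 10.24.24.34
    mode    balanced
</role>
"""

sections = parse_sections(CONF)
print(sections[("role", "writer")]["mode"],
      len(sections[("role", "reader")]["ips"].split(",")))
```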
Parameter details:
(2) Configure mmm_agent.conf on all MySQL nodes
# vim /etc/mysql-mmm/mmm_agent.conf
On the Master1 node:
include mmm_common.conf
this db1
On the Master2 node:
include mmm_common.conf
this db2
On the Slave1 node:
include mmm_common.conf
this db3
On the Slave2 node:
include mmm_common.conf
this db4
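The four mmm_agent.conf variants above differ only in the `this` line. A tiny sketch (an assumed helper, not part of MMM) that renders the right file body for each host:

```python
# Map each host name to its MMM node id and render mmm_agent.conf for it.

NODE_IDS = {"Master1": "db1", "Master2": "db2", "Slave1": "db3", "Slave2": "db4"}

def agent_conf(hostname: str) -> str:
    """Render the /etc/mysql-mmm/mmm_agent.conf body for one node."""
    return f"include mmm_common.conf\nthis {NODE_IDS[hostname]}\n"

print(agent_conf("Slave2"), end="")
```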
(3) Configure mmm_mon.conf on the MMM management node (Monitor)
# vim /etc/mysql-mmm/mmm_mon.conf
<<-------------------------------- config -------------------------------->>
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mysql-mmm/mmm_mond.pid
bin_path /usr/libexec/mysql-mmm
status_path /var/lib/mysql-mmm/mmm_mond.status
ping_ips 10.24.24.1, 10.24.24.20, 10.24.24.21, 10.24.24.22, 10.24.24.23
flap_duration 3600
flap_count 3
auto_set_online 0
# The kill_host_bin does not exist by default, though the monitor will
# throw a warning about it missing. See the section 5.10 "Kill Host
# Functionality" in the PDF documentation.
#
# kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
#
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password monitor_password
</host>
debug 0
<<-------------------------------- config -------------------------------->>
Parameter details:
(4) Configure the agent default file on all MMM cluster nodes
# vim /etc/default/mysql-mmm-agent
ENABLED=1
With that, the four main MMM configuration files are done; copy mmm_common.conf from the MMM management node to each of the four MySQL nodes.
Upload the configuration file
# cd /etc/mysql-mmm/
# rz
# chmod 640 mmm_common.conf
It is best to set the permissions of all MMM configuration files to 640, otherwise the MMM services may fail to start.
=======================================================
8.4.5 Managing MMM
1. MMM cluster service management
[root@Monitor ~]# /etc/init.d/mysql-mmm-monitor
Usage: /etc/init.d/mysql-mmm-monitor {start|stop|restart|condrestart|status}
[root@Master1 ~]# /etc/init.d/mysql-mmm-agent
Usage: /etc/init.d/mysql-mmm-agent {start|stop|restart|condrestart|status}
After the MMM cluster has been configured, these two scripts are used to start it.
Start the mysql-mmm-monitor service on the MMM management node
[root@Monitor ~]# /etc/init.d/mysql-mmm-monitor start
Starting MMM Monitor Daemon: [ OK ]
Start the agent service on each MySQL node in turn
[root@Master1 ~]# /etc/init.d/mysql-mmm-agent start
Starting MMM Agent Daemon: [ OK ]
--------------------------------------------
2. Basic MMM maintenance
<<-------------------------------- output -------------------------------->>
[root@Monitor ~]# mmm_control help
Valid commands are:
help - show this message
ping - ping monitor
show - show status
checks [<host>|all [<check>|all]] - show checks status
set_online <host> - set host <host> online
set_offline <host> - set host <host> offline
mode - print current mode.
set_active - switch into active mode.
set_manual - switch into manual mode.
set_passive - switch into passive mode.
move_role [--force] <role> <host> - move exclusive role <role> to host <host>
(Only use --force if you know what you are doing!)
set_ip <ip> <host> - set role with ip <ip> to host <host>
<<-------------------------------- output -------------------------------->>
Parameter details:
A few common examples of MMM cluster maintenance:
1) View the cluster's running state
[root@Monitor mysql-mmm]# mmm_control show
db1(10.24.24.20) master/ONLINE. Roles: reader(10.24.24.33), writer(10.24.24.30)
db2(10.24.24.21) master/ONLINE. Roles: reader(10.24.24.34)
db3(10.24.24.22) slave/ONLINE. Roles: reader(10.24.24.31)
db4(10.24.24.23) slave/ONLINE. Roles: reader(10.24.24.32)
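The `mmm_control show` output above has a fixed shape, so monitoring scripts can parse it. A minimal sketch of such a parser (a hypothetical helper, not part of MMM):

```python
# Parse `mmm_control show` output into {host: {ip, mode, state, roles}}.

import re

LINE = re.compile(
    r"^\s*(\w+)\((\d+\.\d+\.\d+\.\d+)\)\s+(\w+)/(\w+)\.\s+Roles:\s*(.*)$")

def parse_show(output: str) -> dict:
    """Return one dict entry per host line of mmm_control show."""
    hosts = {}
    for line in output.strip().splitlines():
        m = LINE.match(line)
        if m:
            name, ip, mode, state, roles = m.groups()
            hosts[name] = {
                "ip": ip, "mode": mode, "state": state,
                "roles": [r.strip() for r in roles.split(",") if r.strip()],
            }
    return hosts

# The status output shown above:
SAMPLE = """\
db1(10.24.24.20) master/ONLINE. Roles: reader(10.24.24.33), writer(10.24.24.30)
db2(10.24.24.21) master/ONLINE. Roles: reader(10.24.24.34)
db3(10.24.24.22) slave/ONLINE. Roles: reader(10.24.24.31)
db4(10.24.24.23) slave/ONLINE. Roles: reader(10.24.24.32)
"""

hosts = parse_show(SAMPLE)
# Which host currently holds the writer role?
print([h for h, v in hosts.items() if any(r.startswith("writer") for r in v["roles"])])
```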
In an MMM cluster, a node can be in one of the following states:
2) Check which mode the MMM cluster is currently running in
[root@Monitor mysql-mmm]# mmm_control mode
ACTIVE
3) View the running state of all MMM cluster nodes
[root@Monitor mysql-mmm]# mmm_control checks all
db4 ping [last change: 2016/09/13 15:11:34] OK
db4 mysql [last change: 2016/09/13 15:11:34] OK
db4 rep_threads [last change: 2016/09/13 15:11:34] OK
db4 rep_backlog [last change: 2016/09/13 15:11:34] OK: Backlog is null
db2 ping [last change: 2016/09/13 15:11:34] OK
db2 mysql [last change: 2016/09/13 15:11:34] OK
db2 rep_threads [last change: 2016/09/13 15:11:34] OK
db2 rep_backlog [last change: 2016/09/13 15:11:34] OK: Backlog is null
db3 ping [last change: 2016/09/13 15:11:34] OK
db3 mysql [last change: 2016/09/13 15:11:34] OK
db3 rep_threads [last change: 2016/09/13 15:11:34] OK
db3 rep_backlog [last change: 2016/09/13 15:11:34] OK: Backlog is null
db1 ping [last change: 2016/09/13 15:11:34] OK
db1 mysql [last change: 2016/09/13 15:11:34] OK
db1 rep_threads [last change: 2016/09/13 15:11:34] OK
db1 rep_backlog [last change: 2016/09/13 15:11:34] OK: Backlog is null
4) View the running state of a single node
[root@Monitor mysql-mmm]# mmm_control checks db1
db1 ping [last change: 2016/09/13 15:11:34] OK
db1 mysql [last change: 2016/09/13 15:11:34] OK
db1 rep_threads [last change: 2016/09/13 15:11:34] OK
db1 rep_backlog [last change: 2016/09/13 15:11:34] OK: Backlog is null
=========================================================
8.4.6 Testing MySQL high availability with MMM
1. Read/write splitting test
First grant remote access privileges on master1, master2, slave1, and slave2:
mysql> grant all on *.* to 'root'@'10.24.24.%' identified by 'jzh0024';
mysql> flush privileges;
mysql> select user,host from mysql.user;
Log in through the writable VIP (the session lands on the Master1 node), create a table mmm_test, and insert one row:
[root@mysql01 ~]# mysql -uroot -pjzh0024 -h 10.24.24.30
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1701
Server version: 5.1.73-log Source distribution
mysql>
mysql> show variables like "%hostname%";
+---------------+---------+
| Variable_name | Value |
+---------------+---------+
| hostname | Master1 |
+---------------+---------+
1 row in set (0.00 sec)
mysql>
mysql> create database repldb;
Query OK, 1 row affected (0.00 sec)
mysql> use repldb;
Database changed
mysql> create table mmm_test(id int,email varchar(30));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into mmm_test (id,email) values(186,"[email protected]");
Query OK, 1 row affected (0.00 sec)
mysql> select * from mmm_test;
+------+----------------+
| id | email |
+------+----------------+
| 186 | [email protected] |
+------+----------------+
1 row in set (0.00 sec)
Now log in to the Master2, Slave1, and Slave2 nodes and check whether the data has replicated:
mysql> show databases;
mysql> use repldb;
mysql> show tables;
mysql> select * from mmm_test;
The data has been fully replicated to the Master2, Slave1, and Slave2 nodes.
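The read/write split the VIPs provide can be pictured as simple routing logic: writes always go to the single writer VIP, reads rotate over the reader VIPs. A sketch of that logic only (a real client would do this in its connection pool; this is not how MMM itself routes traffic):

```python
# Route SQL statements across the VIPs from the architecture above:
# writes to the exclusive writer VIP, SELECTs round-robin over reader VIPs.

import itertools

WRITER_VIP = "10.24.24.30"
READER_VIPS = ["10.24.24.31", "10.24.24.32", "10.24.24.33", "10.24.24.34"]
_readers = itertools.cycle(READER_VIPS)

def route(statement: str) -> str:
    """Pick a VIP for a statement: SELECTs spread over readers, the rest write."""
    verb = statement.lstrip().split(None, 1)[0].upper()
    return next(_readers) if verb == "SELECT" else WRITER_VIP

print(route("insert into mmm_test values (186, '[email protected]')"))  # 10.24.24.30
print(route("select * from mmm_test"))  # one of the reader VIPs
```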
-------------------------------------------
Next, still from the remote MySQL client, log in to the MySQL cluster through one of the read-only VIPs that MMM provides:
[root@mysql01 ~]# mysql -uroot -pjzh0024 -h 10.24.24.32
mysql> show variables like "%hostname%";
+---------------+--------+
| Variable_name | Value |
+---------------+--------+
| hostname | Slave1 |
+---------------+--------+
1 row in set (0.00 sec)
mysql> use repldb;
mysql> create table mmm_test1(id int,email varchar(100));
This test did not behave as expected: the write succeeded, so data can still be written through the read-only VIP. A likely cause is that read_only=1 does not restrict accounts holding the SUPER privilege, and this test logs in as root, which was granted all privileges (including SUPER) above.
-------------------------------------------
2. Failover test
First check the MMM cluster's current running state
[root@Monitor mysql-mmm]# mmm_control show
db1(10.24.24.20) master/ONLINE. Roles: reader(10.24.24.33), writer(10.24.24.30)
db2(10.24.24.21) master/ONLINE. Roles: reader(10.24.24.34)
db3(10.24.24.22) slave/ONLINE. Roles: reader(10.24.24.31)
db4(10.24.24.23) slave/ONLINE. Roles: reader(10.24.24.32)
Stop the MySQL service on the Master1 node, then check the MMM cluster's running state again