MasterFailover (Non-GTID)
MHA::MasterFailover::main()->do_master_failover
failover_non_gtid
Phase 1: Configuration Check Phase
- init_config(): initialize the configuration
- MHA::ServerManager::init_binlog_server: initialize the binlog server(s)
- check_settings()
a. check_node_version(): check the MHA Node version
b. connect_all_and_read_server_status(): verify that MySQL on every node is reachable
c. get_dead_servers()/get_alive_servers()/get_alive_slaves(): re-check the status of every node
d. print_dead_servers(): check whether the dead server is the current master
e. MHA::DBHelper::check_connection_fast_util: quickly double-check that the dead server is really down (no double check when ping_type=insert)
f. MHA::NodeUtil::drop_file_if($_failover_error_file|$_failover_complete_file): check for leftover files from the previous failover
g. if the last failover happened within the past 8 hours, this failover is aborted unless extra options are given
h. start_sql_threads_if(): check that Slave_SQL_Running is Yes on every slave, and start the SQL thread where it is not
- is_gtid_auto_pos_enabled(): determine whether the group runs in GTID mode
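The 8-hour guard in step g can be restated as a small pure function; the names and constants below are illustrative, only the `--ignore_last_failover` semantics come from MHA's CLI:

```python
# Sketch of MHA's "last failover within 8 hours" guard (step g above).
# Function and argument names are mine, not MHA's.
FAILOVER_COOLDOWN_SECS = 8 * 3600

def failover_allowed(last_failover_ts, now, ignore_last_failover=False):
    """True if a new failover may proceed.

    last_failover_ts: unix time of the previous failover, or None if none.
    ignore_last_failover: mirrors the --ignore_last_failover option.
    """
    if last_failover_ts is None or ignore_last_failover:
        return True
    return now - last_failover_ts >= FAILOVER_COOLDOWN_SECS
```

With a failover one hour ago, a new one is refused unless the flag is passed; this is why `--ignore_last_failover` shows up in the common commands at the end of these notes.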
Phase 2: Dead Master Shutdown Phase..
- force_shutdown($dead_master):
a. stop_io_thread(): stop the IO thread on every slave
b. force_shutdown_internal($dead_master):
b_1. master_ip_failover_script: if configured, run its logic (e.g. move the VIP away)
b_2. shutdown_script: if configured, run its logic (e.g. power off the server)
Phase 3: Master Recovery Phase..
- Phase 3.1: Getting Latest Slaves Phase..
* check_set_latest_slaves()
a. read_slave_status(): collect SHOW SLAVE STATUS from every slave
b. identify_latest_slaves(): find the most up-to-date slave(s)
c. identify_oldest_slaves(): find the most out-of-date slave(s)
- Phase 3.2: Saving Dead Master’s Binlog Phase..
* save_master_binlog($dead_master);
-> if the dead master is reachable over SSH:
b_1_1. save_master_binlog_internal: use the Node-side save_binary_logs script to save the relevant binlogs
diff_binary_log: generate the differential binlog
b_1_2. file_copy: copy the differential binlog into manager_workdir on the manager node
-> if the dead master is not reachable over SSH:
b_1_3. the differential binlog events are lost
- Phase 3.3: Determining New Master Phase..
b. if GTID auto-position is not enabled, call find_latest_base_slave()
b_1. find_latest_base_slave_internal: find the latest slave that has all the relay logs; if none exists, the failover fails
b_1_1. find_slave_with_all_relay_logs:
b_1_1_1. apply_diff_relay_logs: check whether the latest slave holds the relay-log events the other slaves are missing
c. select_new_master: elect the new master
c_1. MHA::ServerManager::select_new_master:
#If preferred node is specified, one of active preferred nodes will be new master.
#If the latest server behinds too much (i.e. stopping sql thread for online backups), we should not use it as a new master, but we should fetch relay log there
#Even though preferred master is configured, it does not become a master if it's far behind
get_candidate_masters(): collect the candidate nodes from the configuration
get_bad_candidate_masters(): a server cannot become a candidate master if any of the following holds:
# dead server
# no_master >= 1
# log_bin=0
# oldest_major_version=0
# check_slave_delay: check whether the slave lags badly (can be skipped with no_check_delay)
# a slave passes as long as its Exec_Master_Log_Pos is not more than 100000000 behind the latest position
Election order: candidate_master nodes first, then the latest slave, then an arbitrary remaining slave
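The election order and the bad-candidate filter above can be condensed into a sketch; the dict keys and helper names are illustrative, and the filter is abbreviated (the major-version check is omitted):

```python
# Condensed sketch of select_new_master / get_bad_candidate_masters.
# Field names are illustrative, not MHA's internal attribute names.
MAX_DELAY_BYTES = 100_000_000  # the hard-coded position-delay threshold

def is_bad_candidate(s, latest_pos, check_delay=True):
    """Mirror the disqualifying conditions listed above (abbreviated)."""
    if s["dead"] or s["no_master"] or not s["log_bin"]:
        return True
    if check_delay and latest_pos - s["exec_master_log_pos"] > MAX_DELAY_BYTES:
        return True
    return False

def select_new_master(slaves, latest, latest_pos):
    good = [s for s in slaves
            if not is_bad_candidate(s, latest_pos,
                                    check_delay=not s.get("no_check_delay"))]
    # 1) a preferred (candidate_master=1) node wins if it survived filtering
    for s in good:
        if s["candidate_master"]:
            return s
    # 2) otherwise the latest slave, 3) otherwise any remaining good slave
    if latest in good:
        return latest
    return good[0] if good else None
```

A node with `no_master=1` is filtered out even if it is the latest slave, which is why the failover fails outright when no good candidate remains.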
- Phase 3.3(3.4): New Master Diff Log Generation Phase..
* recover_master_internal
recover_relay_logs:
check whether the new master is the latest slave; if not, generate the differential relay logs and send them to the new master
recover_master_internal:
send the binlog previously saved from the dead master to the new master
- Phase 3.4: Master Log Apply Phase..
* apply_diff:
a. wait_until_relay_log_applied: wait until the new master has applied all of its relay logs
b. compare Exec_Master_Log_Pos with Read_Master_Log_Pos; if they differ, generate a differential log:
save_binary_logs --command=save
c. apply_diff_relay_logs --command=apply: recover the new master by applying:
c_1. exec_diff: the diff between Exec_Master_Log_Pos and Read_Master_Log_Pos
c_2. read_diff: the relay-log diff between the new master and the latest slave
c_3. binlog_diff: the binlog diff between the latest slave and the dead master
* if master_ip_failover_script is configured, it is executed here (typically to move the VIP)
* disable_read_only(): make the new master writable
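Which differential logs step c ends up applying can be summarized as a tiny decision function (a restatement of a-c above; the inputs are illustrative):

```python
# Sketch of which differential logs Phase 3.4 applies to the new master,
# in order. Inputs are illustrative position/flag values.
def diff_logs_to_apply(exec_pos, read_pos, is_latest_slave, have_master_binlog):
    """Return the ordered list of diff logs the new master needs."""
    diffs = []
    if exec_pos != read_pos:
        diffs.append("exec_diff")    # relay events fetched but not yet executed
    if not is_latest_slave:
        diffs.append("read_diff")    # relay-log gap vs. the latest slave
    if have_master_binlog:
        diffs.append("binlog_diff")  # binlog saved from the dead master
    return diffs
```

When the dead master was unreachable over SSH (step b_1_3 above), `have_master_binlog` is false and the last diff is simply unavailable.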
Phase 4: Slaves Recovery Phase..
recover_slaves_internal
- Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
recover_all_slaves_relay_logs: generate the diff logs between each slave and the new master, and copy each diff into that slave's working directory
- Phase 4.2: Starting Parallel Slave Log Apply Phase..
* recover_slave:
recover each slave, using the same apply_diff procedure as in Phase 3.4: Master Log Apply Phase above
* change_master_and_start_slave:
repoint the slave at the new master and run START SLAVE
Phase 5: New master cleanup phase..
- reset_slave_on_new_master
run RESET SLAVE ALL on the new master
MasterFailover (GTID)
failover_gtid
Phase 1: Configuration Check Phase
- init_config(): initialize the configuration
- MHA::ServerManager::init_binlog_server: initialize the binlog server(s)
- check_settings()
a. check_node_version(): check the MHA Node version
b. connect_all_and_read_server_status(): verify that MySQL on every node is reachable
c. get_dead_servers()/get_alive_servers()/get_alive_slaves(): re-check the status of every node
d. print_dead_servers(): check whether the dead server is the current master
e. MHA::DBHelper::check_connection_fast_util: quickly double-check that the dead server is really down (no double check when ping_type=insert)
f. MHA::NodeUtil::drop_file_if($_failover_error_file|$_failover_complete_file): check for leftover files from the previous failover
g. if the last failover happened within the past 8 hours, this failover is aborted unless extra options are given
h. start_sql_threads_if(): check that Slave_SQL_Running is Yes on every slave, and start the SQL thread where it is not
- is_gtid_auto_pos_enabled(): determine whether the group runs in GTID mode
Phase 2: Dead Master Shutdown Phase..
- force_shutdown($dead_master):
a. stop_io_thread(): stop the IO thread on every slave
b. force_shutdown_internal($dead_master):
b_1. master_ip_failover_script: if configured, run its logic (e.g. move the VIP away)
b_2. shutdown_script: if configured, run its logic (e.g. power off the server)
Phase 3: Master Recovery Phase..
- Phase 3.1: Getting Latest Slaves Phase..
* check_set_latest_slaves()
a. read_slave_status(): collect SHOW SLAVE STATUS from every slave
b. identify_latest_slaves(): find the most up-to-date slave(s)
c. identify_oldest_slaves(): find the most out-of-date slave(s)
Phase 3.2: Saving Dead Master’s Binlog Phase.. (this phase is skipped in GTID mode)
Phase 3.3: Determining New Master Phase..
get_most_advanced_latest_slave(): find the most advanced of the latest slaves
c. select_new_master: elect the new master
c_1. MHA::ServerManager::select_new_master:
#If preferred node is specified, one of active preferred nodes will be new master.
#If the latest server behinds too much (i.e. stopping sql thread for online backups), we should not use it as a new master, but we should fetch relay log there
#Even though preferred master is configured, it does not become a master if it's far behind
get_candidate_masters(): collect the candidate nodes from the configuration
get_bad_candidate_masters(): a server cannot become a candidate master if any of the following holds:
# dead server
# no_master >= 1
# log_bin=0
# oldest_major_version=0
# check_slave_delay: check whether the slave lags badly (can be skipped with no_check_delay)
# a slave passes as long as its Exec_Master_Log_Pos is not more than 100000000 behind the latest position
Election order: candidate_master nodes first, then the latest slave, then an arbitrary remaining slave
- Phase 3.3: New Master Recovery Phase..
* recover_master_gtid_internal:
wait_until_relay_log_applied: the candidate master waits until all of its relay logs are applied
if the candidate master is not the latest slave:
$latest_slave->wait_until_relay_log_applied($log): the latest slave applies all of its relay logs
change_master_and_start_slave: point the candidate master at the latest slave so it catches up
record the candidate master's current log coordinates for the switch later
if the candidate master is the latest slave:
record the candidate master's current log coordinates for the switch later
save_from_binlog_server:
if a binlog server is configured and reachable, copy its binlogs to the manager and generate the differential binlog (save_binary_logs --command=save)
apply_binlog_to_master:
Applying differential binlog: apply the differential binlog to the new master
Phase 4: Slaves Recovery Phase..
- Phase 4.1: Starting Slaves in parallel..
* recover_slaves_gtid_internal:
change_master_and_start_slave: since the master is already recovered, each slave only needs a CHANGE MASTER with auto-position enabled
gtid_wait: wait here until every slave has fully caught up
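The per-slave recovery in GTID mode boils down to two statements, sketched here as SQL builders. Host, port, user and GTID set are placeholders; the wait is shown with MySQL's WAIT_FOR_EXECUTED_GTID_SET (5.7.5+), though MHA's gtid_wait may call a different GTID wait function:

```python
# Sketch of the statements behind change_master_and_start_slave and
# gtid_wait in GTID mode. All values here are placeholders.
def change_master_sql(host, port, user):
    return (
        f"CHANGE MASTER TO MASTER_HOST='{host}', MASTER_PORT={port}, "
        f"MASTER_USER='{user}', MASTER_AUTO_POSITION=1"
    )

def gtid_wait_sql(gtid_set, timeout_secs=300):
    # MySQL >= 5.7.5; older servers offer WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS
    return f"SELECT WAIT_FOR_EXECUTED_GTID_SET('{gtid_set}', {timeout_secs})"
```

With auto-positioning, no binlog file/position bookkeeping is needed per slave, which is why Phase 4 is so much simpler here than in the non-GTID flow.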
Phase 5: New master cleanup phase..
- reset_slave_on_new_master
run RESET SLAVE ALL on the new master
MasterRotate (Non-GTID)
Phase 1: Configuration Check Phase
- do_master_online_switch
- identify_orig_master
* read_config():
Reading default configuration from /etc/masterha_default.cnf..
Reading application default configuration from /etc/app1.cnf..
Reading server configuration from /etc/app1.cnf..
* connect_all_and_read_server_status:
connect_check: first run a connect check to make sure the MySQL service on every server is healthy
connect_and_get_status:
collect server_id/mysql_version/log_bin/.. from each MySQL instance
run SHOW SLAVE STATUS to identify the current master node; empty output means the node itself is a master (0.56 no longer decides it this way, since multi-master is supported)
validate_current_master: obtain the master node's details and validate the configuration
check whether any server is down; if so, abort the rotate
check whether the master is alive; if it is dead, abort the rotate
check_repl_priv:
check that the user has replication privileges
take monitor_advisory_lock to ensure no other monitor process is running on the master:
run: SELECT GET_LOCK('MHA_Master_High_Availability_Monitor', ?) AS Value
take failover_advisory_lock to ensure no other failover process is running on the slaves:
run: SELECT GET_LOCK('MHA_Master_High_Availability_Failover', ?) AS Value
check_replication_health:
run SHOW SLAVE STATUS to evaluate current_slave_position/has_replication_problem
has_replication_problem checks the IO thread, the SQL thread and Seconds_Behind_Master (1s threshold)
get_running_update_threads:
use SHOW PROCESSLIST to look for running update threads; if any exist, abort the switch
$self->validate_current_master():
check whether the group runs in GTID mode
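The get_running_update_threads check can be sketched as a filter over SHOW PROCESSLIST rows; the row shape and the DML pattern here are illustrative simplifications:

```python
import re

# Rows whose Info is a DML statement indicate in-flight writes; the
# online switch aborts when any are found. Row shape is illustrative.
WRITE_RE = re.compile(r"^\s*(insert|update|delete|replace)\b", re.IGNORECASE)

def running_update_threads(processlist):
    """Return the processlist rows that look like running writes."""
    return [row for row in processlist
            if row.get("Info") and WRITE_RE.match(row["Info"])]
```

The `--running_updates_limit` option in the common commands at the end relaxes this check by tolerating short-running writes instead of aborting immediately.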
- identify_new_master
set_latest_slaves: all current slaves count as latest slaves
select_new_master: elect the new master
c_1. MHA::ServerManager::select_new_master:
#If preferred node is specified, one of active preferred nodes will be new master.
#If the latest server behinds too much (i.e. stopping sql thread for online backups), we should not use it as a new master, but we should fetch relay log there
#Even though preferred master is configured, it does not become a master if it's far behind
get_candidate_masters(): collect the candidate nodes from the configuration
get_bad_candidate_masters(): a server cannot become a candidate master if any of the following holds:
# dead server
# no_master >= 1
# log_bin=0
# oldest_major_version=0
# check_slave_delay: check whether the slave lags badly (can be skipped with no_check_delay)
# a slave passes as long as its Exec_Master_Log_Pos is not more than 100000000 behind the latest position
Election order: candidate_master nodes first, then the latest slave, then an arbitrary remaining slave
Phase 2: Rejecting updates Phase
- reject_update
* LOCK TABLES is used to reject writes to the binlog
call master_ip_online_change_script --command=stop
if master_ip_online_change_script is set in the MHA configuration, run it to disable writes on the current master
when a VIP is in use, this script typically removes the VIP from the original master (optional)
reconnect: make sure the connection to the master is still healthy
lock_all_tables: run FLUSH TABLES WITH READ LOCK to lock the tables
check_binlog_stop: run SHOW MASTER STATUS twice in a row to verify that binlog writes have stopped
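check_binlog_stop's idea, restated: take two SHOW MASTER STATUS snapshots a moment apart and require that the coordinates did not move. A sketch over illustrative dicts:

```python
# Binlog writes are considered stopped when file and position are
# identical across two consecutive SHOW MASTER STATUS snapshots.
def binlog_stopped(first, second):
    return (first["File"] == second["File"]
            and first["Position"] == second["Position"])
```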
- read_slave_status
get_alive_slaves:
check_slave_status: run SHOW SLAVE STATUS to collect each slave's state
- switch_master
switch_master_internal:
master_pos_wait: call SELECT MASTER_POS_WAIT(...) to wait until replication has fully caught up
get_new_master_binlog_position: obtained via SHOW MASTER STATUS
Allow write access on the new master:
call master_ip_online_change_script --command=start ... to point the VIP at the new master
disable_read_only:
run SET GLOBAL read_only=0 on the new master
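The heart of switch_master_internal is MySQL's MASTER_POS_WAIT function: block until the slave's SQL thread has executed past the given master coordinates. A sketch building that call (coordinates are placeholders):

```python
def master_pos_wait_sql(log_file, log_pos, timeout_secs=30):
    # MASTER_POS_WAIT returns the number of events waited for,
    # or -1/NULL on timeout or error.
    return f"SELECT MASTER_POS_WAIT('{log_file}', {log_pos}, {timeout_secs})"
```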
- switch_slaves
switch_slaves_internal:
change_master_and_start_slave
change_master:
start_slave:
unlock_tables: run UNLOCK TABLES on the original master
Phase 5: New master cleanup phase
- reset_slave_on_new_master
- release_failover_advisory_lock
MasterRotate (GTID)
The online switch in GTID mode follows the same flow as non-GTID mode; only change_master_and_start_slave differs.
A GTID gotcha
I tested a GTID online switch today and hit a very puzzling problem.
- Problem: after an MHA online switch on a freshly built replication group, the topology ended up in a mess.
Command: masterha_master_switch --master_state=alive --conf=/etc/app1.cnf --orig_master_is_new_slave --interactive=0
With a group of A (master), B (candidate master) and C, after the online switch C was still replicating from A:
B(master)
-> A(slave) -> C(slave) -- wrong result
The correct result should be:
B(master)
-> A(slave)
-> C(slave)
The MHA switch logs all looked normal: "Switching master to xx completed successfully."
The only option left was to read the source, and sure enough the culprit showed up quickly:
MHA::ServerManager->is_gtid_auto_pos_enabled->get_gtid_status()
sub get_gtid_status($) {
  my $self    = shift;
  my @servers = $self->get_alive_servers();
  my @slaves  = $self->get_alive_slaves();
  return 0 if ( $#servers < 0 );
  foreach (@servers) {
    return 0 unless ( $_->{has_gtid} );
  }
  foreach (@slaves) {
    # if SHOW SLAVE STATUS reports an empty Executed_Gtid_Set on any
    # slave, the whole group is treated as non-GTID
    return 0 unless ( $_->{Executed_Gtid_Set} );
  }
  foreach (@slaves) {
    return 1
      if ( defined( $_->{Auto_Position} )
      && $_->{Auto_Position} == 1 );
    return 1 if ( $_->{use_gtid_auto_pos} );
  }
  return 2;
}
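For clarity, the same decision restated as a pure Python function (a paraphrase of the Perl, not MHA code; 0 = non-GTID, 1 = GTID with auto-position, 2 = GTID without):

```python
def get_gtid_status(servers, slaves):
    """Mirror of the Perl logic above, over lists of attribute dicts."""
    if not servers:
        return 0
    if not all(s.get("has_gtid") for s in servers):
        return 0
    # the bite from the incident above: one slave with an empty
    # Executed_Gtid_Set demotes the whole group to non-GTID mode
    if not all(s.get("Executed_Gtid_Set") for s in slaves):
        return 0
    if any(s.get("Auto_Position") == 1 or s.get("use_gtid_auto_pos")
           for s in slaves):
        return 1
    return 2
```

On a freshly built group no transaction has ever been replicated, so Executed_Gtid_Set is empty and MHA silently falls back to the non-GTID switch path, producing the broken topology shown above.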
* The fix is simple:
1) Since no transaction has ever been executed on the slaves, just execute one.
2) Or modify the source to remove this check.
Both approaches were verified to work.
Key points to watch
- MHA without a binlog server
* As the source analysis above shows, a GTID-mode failover without a binlog server configured loses data even when the dead master is reachable over SSH.
* Testing confirms that even with SSH access to the old master, MHA does not save its binlog in GTID mode, so GTID and non-GTID mode differ significantly here.
* The fix: configure a binlog server.
/etc/app1.cnf
[server default]
remote_workdir=/var/log/masterha/app1
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/app1.log
[server1]
hostname=host1
candidate_master=1
check_repl_delay=0
[server2]
hostname=host2
candidate_master=1
check_repl_delay=0
[server3]
hostname=host3
no_master=1
check_repl_delay=0
[binlog1]
hostname=host1 --note: this can be the master itself or a dedicated binlog server
- Common configuration
/etc/app1.cnf
[server default]
remote_workdir=/var/log/masterha/app1
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/app1.log
[server1]
hostname=host1
candidate_master=1 --marks a candidate master
check_repl_delay=0 --without this, the switch fails when the slave is lagging
[server2]
hostname=host2
candidate_master=1
check_repl_delay=0
[server3]
hostname=host3
no_master=1 --never becomes the new master
check_repl_delay=0
ignore_fail=1 --unless set to 1, the failover fails when server3 has a problem
[binlog1]
hostname=host1 --note: this can be the master itself or a dedicated binlog server
- Common commands
masterha_check_repl --conf=/etc/app1.cnf
masterha_check_status --conf=/etc/app1.cnf
masterha_stop --conf=/etc/app1.cnf
masterha_ssh_check --conf=/etc/app1.cnf
masterha_master_switch --master_state=dead --conf=/etc/app1.cnf --dead_master_host=host_1 --interactive=1 --ignore_last_failover
masterha_master_switch --master_state=dead --conf=/etc/app1.cnf --dead_master_host=host_1 --interactive=0 --ignore_last_failover
masterha_master_switch --master_state=alive --conf=/etc/app1.cnf --orig_master_is_new_slave --interactive=0 --running_updates_limit=10000
masterha_master_switch --master_state=dead --conf=/etc/app1.cnf --dead_master_host=host_1 --interactive=1 --ignore_last_failover --new_master_host=host_2
nohup masterha_manager --conf=/etc/app1.cnf --last_failover_minute=1 --ignore_last_failover &
Original article: MySQL Master High Available 源碼篇
https://yq.aliyun.com/articles/59233?spm=5176.100239.blogcont58920.12.wGfswA