MySQL MHA High Availability: Installation and Maintenance

1. Install and deploy MHA

Role              IP address     Hostname  ServerID  Type
Master            172.19.0.171   MHA1      1         write
Candidate master  172.19.0.172   MHA2      2         read
Slave             172.19.0.173   MHA3      3         read
Monitor host      172.19.0.174   MHA4      -         monitors the cluster

Add all four hosts to /etc/hosts on every machine:

[root@MHA1 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1          localhost.localdomain localhost

::1              localhost6.localdomain6 localhost6

172.19.0.171 MHA1

172.19.0.172 MHA2

172.19.0.173 MHA3

172.19.0.174 MHA4

The master serves writes; the candidate master and the slave both serve reads. If the master goes down, the candidate master is promoted to the new master and the slave is repointed to it.

2. First install the MySQL server software on all three database hosts: 172.19.0.171, 172.19.0.172 and 172.19.0.173.

3. Set up the master-slave replication environment

1) Create the replication user on 172.19.0.171:

mysql> grant replication slave on *.* to 'repl'@'172.19.0.%' identified by '123456';

Query OK, 0 rows affected (0.00 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000002 |      258 |              |                  |

+------------------+----------+--------------+------------------+

1 row in set (0.00 sec)

 

mysql>

 

2) Set up the slaves on 172.19.0.172 and 172.19.0.173:

mysql> change master to master_host='172.19.0.171',master_user='repl',master_password='123456',master_log_file='mysql-bin.000002',master_log_pos=258;

Query OK, 0 rows affected (0.06 sec)

 

mysql> start slave;

Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 172.19.0.171

                  Master_User: repl

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000002

          Read_Master_Log_Pos: 258

               Relay_Log_File: MHA2-relay-bin.000002

                Relay_Log_Pos: 253

        Relay_Master_Log_File: mysql-bin.000002

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 258

              Relay_Log_Space: 408

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

The output above shows that the slave is replicating successfully.

3) Set the slave servers (172.19.0.172 and 172.19.0.173) to read-only:

mysql> set global read_only=1;

4) Disable automatic purging of relay logs (on each slave):

mysql> set global relay_log_purge=0;
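These two settings can be applied to both slaves in one pass. A minimal sketch (the root password 123456 and the slave IPs are taken from this guide; the dry-run echo is an assumption, drop it to execute for real):

```shell
#!/bin/sh
# Sketch: generate the commands that apply the read-only and
# relay-log-purge settings from steps 3) and 4) to each slave.
# Dry-run: the mysql invocations are printed, not executed.
gen_slave_setup() {
  SQL="SET GLOBAL read_only=1; SET GLOBAL relay_log_purge=0;"
  for s in "$@"; do
    echo "mysql -h $s -uroot -p123456 -e \"$SQL\""
  done
}

gen_slave_setup 172.19.0.172 172.19.0.173
```

Note that SET GLOBAL does not survive a server restart, so read_only=1 and relay_log_purge=0 should also go into each slave's my.cnf. MHA relies on the slaves' relay logs to bring lagging slaves up to date during failover, which is why automatic purging is disabled.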

5) Create the monitoring user (execute on 172.19.0.171):

mysql> grant all privileges on *.* to 'root'@'172.19.0.%' identified by '123456';

 

6) Create the replication user on 172.19.0.172 (the candidate master) as well:

mysql> grant replication slave on *.* to 'repl'@'172.19.0.%' identified by '123456';

 

Then run the following on every host so the MHA tools can find the MySQL client binaries:

ln -s /opt/mysql/bin/* /usr/local/bin/

 

4. Configure passwordless SSH authentication

1) On 172.19.0.171:

[root@MHA1 ~]# ssh-keygen -t rsa

[root@MHA1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.172

[root@MHA1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.173

[root@MHA1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.174

2) On 172.19.0.172:

[root@MHA2 ~]# ssh-keygen -t rsa

[root@MHA2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.171

[root@MHA2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.173

[root@MHA2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.174

3) On 172.19.0.173:

[root@MHA3 ~]# ssh-keygen -t rsa

[root@MHA3 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.171

[root@MHA3 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.172

[root@MHA3 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.174

 

4) On 172.19.0.174:

[root@MHA4 ~]# ssh-keygen -t rsa

[root@MHA4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.171

[root@MHA4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.172

[root@MHA4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.19.0.173
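The four blocks above are symmetric: each node pushes its key to the other three. As a sketch, the per-node command list can be generated with a loop (host list from this guide; the commands are echoed rather than executed, drop the dry-run to distribute keys for real):

```shell
#!/bin/sh
# Sketch: print the ssh-copy-id commands a given node should run to push
# its key to every other node in this cluster. Dry-run only.
gen_copy_cmds() {
  self="$1"
  for h in 172.19.0.171 172.19.0.172 172.19.0.173 172.19.0.174; do
    [ "$h" = "$self" ] && continue   # a node never copies to itself
    echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h"
  done
}

gen_copy_cmds 172.19.0.171   # the three commands for MHA1
```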

 

 

5. Install MHA Node (on all MySQL servers)

1) Install the DBD::mysql perl module on 172.19.0.171, 172.19.0.172 and 172.19.0.173 using this installation script:

vi nodeinstall.sh

 

#!/bin/bash

wget http://xrl.us/cpanm --no-check-certificate

mv cpanm /usr/bin/

chmod 755 /usr/bin/cpanm

cat >/root/list<< EOF

DBD::mysql

EOF

for package in `cat /root/list`

do

cpanm $package

done

yum -y install perl-DBD-MySQL  ncftp

 

2) Install mha node on all the nodes:

[root@MHA1 ~]# tar zxvf mha4mysql-node-0.53.tar.gz

[root@MHA1 ~]# cd mha4mysql-node-0.53

[root@MHA1 mha4mysql-node-0.53]# perl Makefile.PL

[root@MHA1 mha4mysql-node-0.53]# make

[root@MHA1 mha4mysql-node-0.53]# make install

                                                       

6. Install MHA Manager

1) Install the MHA node package on 172.19.0.174:

vi nodeinstall.sh

 

#!/bin/bash

wget http://xrl.us/cpanm --no-check-certificate

mv cpanm /usr/bin/

chmod 755 /usr/bin/cpanm

cat >/root/list<< EOF

DBD::mysql

EOF

for package in `cat /root/list`

do

cpanm $package

done

 

yum -y install perl-DBD-MySQL  ncftp

Then install the mha node package:

[root@MHA4 ~]# tar zxvf mha4mysql-node-0.53.tar.gz

[root@MHA4 ~]# cd mha4mysql-node-0.53

[root@MHA4 mha4mysql-node-0.53]# perl Makefile.PL

[root@MHA4 mha4mysql-node-0.53]# make

[root@MHA4 mha4mysql-node-0.53]# make install

 

2) Install the MHA Manager package:

[root@MHA4 ~]# vi managerinstall.sh

 

#!/bin/bash

wget http://xrl.us/cpanm --no-check-certificate

mv cpanm /usr/bin/

chmod 755 /usr/bin/cpanm

cat >/root/list<< EOF

DBD::mysql

Config::Tiny

Log::Dispatch

Parallel::ForkManager

Time::HiRes

EOF

for package in `cat /root/list`

do

cpanm $package

done

 

yum -y install perl-Config-Tiny perl-Params-Validate perl-Log-Dispatch perl-Parallel-ForkManager

yum  -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles  ncftp perl-Params-Validate  perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel

 

yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

 

[root@MHA4 ~]# tar zxvf mha4mysql-manager-0.53.tar.gz

[root@MHA4 ~]# cd mha4mysql-manager-0.53

[root@MHA4 mha4mysql-manager-0.53]# perl Makefile.PL

[root@MHA4 mha4mysql-manager-0.53]# make

[root@MHA4 mha4mysql-manager-0.53]# make install

 

7. Configure MHA

1) Create the MHA working directory and the configuration file:

mkdir -p /etc/masterha/

mkdir -p /masterha/app1

vi /etc/masterha/app1.cnf

 

[server default]

manager_workdir=/masterha/app1

manager_log=/masterha/app1/app1.log

master_ip_failover_script=/usr/local/bin/master_ip_failover

master_ip_online_change_script=/usr/local/bin/master_ip_online_change

user=root

password=123456

ssh_user=root

repl_user=repl

repl_password=123456

ping_interval=1

remote_workdir=/tmp

report_script=/usr/local/bin/send_report

secondary_check_script=/usr/bin/masterha_secondary_check  -s MHA2 -s MHA1 --user=root --master_host=MHA1 --master_ip=172.19.0.171 --master_port=3306 --password=123456

shutdown_script=""

 

[server1]

hostname=172.19.0.171

master_binlog_dir=/opt/mysql/data

candidate_master=1

[server2]

hostname=172.19.0.172

master_binlog_dir=/opt/mysql/data

candidate_master=1

check_repl_delay=0

 

[server3]

hostname=172.19.0.173

master_binlog_dir=/opt/mysql/data

no_master=1

 

2) Copy the scripts to the appropriate directories:

[root@MHA4 ~]# cp /root/mha4mysql-manager-0.53/samples/scripts/master_ip_failover /usr/local/bin/

 

[root@MHA4 ~]# cp /root/mha4mysql-manager-0.53/samples/scripts/master_ip_online_change /usr/local/bin/

 

[root@MHA4 ~]# cp /root/mha4mysql-manager-0.53/samples/scripts/send_report /usr/local/bin/

[root@MHA4 ~]# cp /root/mha4mysql-manager-0.53/bin/masterha_secondary_check /usr/bin/

Run yum -y install perl-DBD-MySQL ncftp on all four hosts to prevent dependency errors.

8. Verify the SSH configuration

[root@MHA4 scripts]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

Wed Aug 27 09:09:28 2014 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Wed Aug 27 09:09:28 2014 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Wed Aug 27 09:09:28 2014 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Wed Aug 27 09:09:28 2014 - [info] Starting SSH connection tests..

Wed Aug 27 09:09:29 2014 - [debug]

Wed Aug 27 09:09:28 2014 - [debug]  Connecting via SSH from root@172.19.0.171(172.19.0.171:22) to root@172.19.0.172(172.19.0.172:22)..

Wed Aug 27 09:09:28 2014 - [debug]   ok.

Wed Aug 27 09:09:28 2014 - [debug]  Connecting via SSH from root@172.19.0.171(172.19.0.171:22) to root@172.19.0.173(172.19.0.173:22)..

Wed Aug 27 09:09:29 2014 - [debug]   ok.

Wed Aug 27 09:09:29 2014 - [debug]

Wed Aug 27 09:09:28 2014 - [debug]  Connecting via SSH from root@172.19.0.172(172.19.0.172:22) to root@172.19.0.171(172.19.0.171:22)..

Wed Aug 27 09:09:29 2014 - [debug]   ok.

Wed Aug 27 09:09:29 2014 - [debug]  Connecting via SSH from root@172.19.0.172(172.19.0.172:22) to root@172.19.0.173(172.19.0.173:22)..

Wed Aug 27 09:09:29 2014 - [debug]   ok.

Wed Aug 27 09:09:29 2014 - [debug]

Wed Aug 27 09:09:29 2014 - [debug]  Connecting via SSH from root@172.19.0.173(172.19.0.173:22) to root@172.19.0.171(172.19.0.171:22)..

Wed Aug 27 09:09:29 2014 - [debug]   ok.

Wed Aug 27 09:09:29 2014 - [debug]  Connecting via SSH from root@172.19.0.173(172.19.0.173:22) to root@172.19.0.172(172.19.0.172:22)..

Wed Aug 27 09:09:29 2014 - [debug]   ok.

Wed Aug 27 09:09:29 2014 - [info] All SSH connection tests passed successfully.

The output above shows that all SSH checks passed.

9. Check the whole replication environment

Check the status of the entire cluster with the masterha_check_repl script:

[root@MHA4 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

Wed Aug 27 11:52:49 2014 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Wed Aug 27 11:52:49 2014 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Wed Aug 27 11:52:49 2014 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Wed Aug 27 11:52:49 2014 - [info] MHA::MasterMonitor version 0.53.

Wed Aug 27 11:52:50 2014 - [info] Dead Servers:

Wed Aug 27 11:52:50 2014 - [info] Alive Servers:

Wed Aug 27 11:52:50 2014 - [info]   172.19.0.171(172.19.0.171:3306)

Wed Aug 27 11:52:50 2014 - [info]   172.19.0.172(172.19.0.172:3306)

Wed Aug 27 11:52:50 2014 - [info]   172.19.0.173(172.19.0.173:3306)

Wed Aug 27 11:52:50 2014 - [info] Alive Slaves:

Wed Aug 27 11:52:50 2014 - [info]   172.19.0.172(172.19.0.172:3306)  Version=5.5.25-log (oldest major version between slaves) log-bin:enabled

Wed Aug 27 11:52:50 2014 - [info]     Replicating from 172.19.0.171(172.19.0.171:3306)

Wed Aug 27 11:52:50 2014 - [info]     Primary candidate for the new Master (candidate_master is set)

Wed Aug 27 11:52:50 2014 - [info]   172.19.0.173(172.19.0.173:3306)  Version=5.5.25-log (oldest major version between slaves) log-bin:enabled

Wed Aug 27 11:52:50 2014 - [info]     Replicating from 172.19.0.171(172.19.0.171:3306)

Wed Aug 27 11:52:50 2014 - [info]     Not candidate for the new Master (no_master is set)

Wed Aug 27 11:52:50 2014 - [info] Current Alive Master: 172.19.0.171(172.19.0.171:3306)

Wed Aug 27 11:52:50 2014 - [info] Checking slave configurations..

Wed Aug 27 11:52:50 2014 - [info]  read_only=1 is not set on slave 172.19.0.172(172.19.0.172:3306).

Wed Aug 27 11:52:50 2014 - [warning]  relay_log_purge=0 is not set on slave 172.19.0.172(172.19.0.172:3306).

Wed Aug 27 11:52:50 2014 - [info]  read_only=1 is not set on slave 172.19.0.173(172.19.0.173:3306).

Wed Aug 27 11:52:50 2014 - [warning]  relay_log_purge=0 is not set on slave 172.19.0.173(172.19.0.173:3306).

Wed Aug 27 11:52:50 2014 - [info] Checking replication filtering settings..

Wed Aug 27 11:52:50 2014 - [info]  binlog_do_db= , binlog_ignore_db=

Wed Aug 27 11:52:50 2014 - [info]  Replication filtering check ok.

Wed Aug 27 11:52:50 2014 - [info] Starting SSH connection tests..

Wed Aug 27 11:52:51 2014 - [info] All SSH connection tests passed successfully.

Wed Aug 27 11:52:51 2014 - [info] Checking MHA Node version..

Wed Aug 27 11:52:52 2014 - [info]  Version check ok.

Wed Aug 27 11:52:52 2014 - [info] Checking SSH publickey authentication settings on the current master..

Wed Aug 27 11:52:52 2014 - [info] HealthCheck: SSH to 172.19.0.171 is reachable.

Wed Aug 27 11:52:52 2014 - [info] Master MHA Node version is 0.53.

Wed Aug 27 11:52:52 2014 - [info] Checking recovery script configurations on the current master..

Wed Aug 27 11:52:52 2014 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/opt/mysql/data --output_file=/var/tmp/save_binary_logs_test --manager_version=0.53 --start_file=mysql-bin.000002

Wed Aug 27 11:52:52 2014 - [info]   Connecting to root@172.19.0.171(172.19.0.171)..

  Creating /var/tmp if not exists..    ok.

  Checking output directory is accessible or not..

   ok.

  Binlog found at /opt/mysql/data, up to mysql-bin.000002

Wed Aug 27 11:52:52 2014 - [info] Master setting check done.

Wed Aug 27 11:52:52 2014 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..

Wed Aug 27 11:52:52 2014 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=root --slave_host=172.19.0.172 --slave_ip=172.19.0.172 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.25-log --manager_version=0.53 --relay_log_info=/opt/mysql/data/relay-log.info  --relay_dir=/opt/mysql/data/  --slave_pass=xxx

Wed Aug 27 11:52:52 2014 - [info]   Connecting to root@172.19.0.172(172.19.0.172:22)..

  Checking slave recovery environment settings..

    Opening /opt/mysql/data/relay-log.info ... ok.

    Relay log found at /opt/mysql/data, up to MHA2-relay-bin.000002

    Temporary relay log file is /opt/mysql/data/MHA2-relay-bin.000002

    Testing mysql connection and privileges.. done.

    Testing mysqlbinlog output.. done.

    Cleaning up test file(s).. done.

Wed Aug 27 11:52:53 2014 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user=root --slave_host=172.19.0.173 --slave_ip=172.19.0.173 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.25-log --manager_version=0.53 --relay_log_info=/opt/mysql/data/relay-log.info  --relay_dir=/opt/mysql/data/  --slave_pass=xxx

Wed Aug 27 11:52:53 2014 - [info]   Connecting to root@172.19.0.173(172.19.0.173:22)..

  Checking slave recovery environment settings..

    Opening /opt/mysql/data/relay-log.info ... ok.

    Relay log found at /opt/mysql/data, up to MHA3-relay-bin.000002

    Temporary relay log file is /opt/mysql/data/MHA3-relay-bin.000002

    Testing mysql connection and privileges.. done.

    Testing mysqlbinlog output.. done.

    Cleaning up test file(s).. done.

Wed Aug 27 11:52:53 2014 - [info] Slaves settings check done.

Wed Aug 27 11:52:53 2014 - [info]

172.19.0.171 (current master)

 +--172.19.0.172

 +--172.19.0.173

 

Wed Aug 27 11:52:53 2014 - [info] Checking replication health on 172.19.0.172..

Wed Aug 27 11:52:53 2014 - [info]  ok.

Wed Aug 27 11:52:53 2014 - [info] Checking replication health on 172.19.0.173..

Wed Aug 27 11:52:53 2014 - [info]  ok.

Wed Aug 27 11:52:53 2014 - [warning] master_ip_failover_script is not defined.

Wed Aug 27 11:52:53 2014 - [warning] shutdown_script is not defined.

Wed Aug 27 11:52:53 2014 - [info] Got exit code 0 (Not master dead).

 

MySQL Replication Health is OK.

 

This confirms the replication environment is healthy.

 

 

10. Check the MHA Manager status

Check the manager's status with the masterha_check_status script:

[root@MHA4 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 is stopped(2:NOT_RUNNING).

The message above shows that MHA monitoring is not yet running.

 

11. Start MHA Manager monitoring

[root@MHA4 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /masterha/app1/manager.log  </dev/null 2>&1 &

 

Check that MHA Manager monitoring is now running:

[root@MHA4 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:9648) is running(0:PING_OK), master:172.19.0.171
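For unattended monitoring, the status string can be checked from a script. A minimal sketch (the PING_OK and NOT_RUNNING markers are taken from the outputs in steps 10 and 11; the restart command in the comment mirrors step 11):

```shell
#!/bin/sh
# Sketch: decide from masterha_check_status output whether the manager
# is alive. "PING_OK" is the healthy marker shown above.
manager_healthy() {
  case "$1" in
    *PING_OK*) return 0 ;;
    *)         return 1 ;;
  esac
}

# In a cron job you might restart the manager when the check fails:
#   manager_healthy "$(masterha_check_status --conf=/etc/masterha/app1.cnf)" ||
#     nohup masterha_manager --conf=/etc/masterha/app1.cnf \
#       > /masterha/app1/manager.log </dev/null 2>&1 &
manager_healthy "app1 (pid:9648) is running(0:PING_OK), master:172.19.0.171" &&
  echo "manager running"
```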

 

 

12. VIP configuration

1) Download and install the keepalived cluster heartbeat software:

yum -y install   openssl-devel

yum -y install popt-devel

 

wget http://www.keepalived.org/software/keepalived-1.2.1.tar.gz

tar zxvf keepalived-1.2.1.tar.gz

cd keepalived-1.2.1

./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-431.23.3.el6.x86_64/

make && make install

cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/

cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

mkdir /etc/keepalived

cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

 

2) Edit the keepalived configuration file.

On 172.19.0.171:

! Configuration File for keepalived

 

global_defs {

   notification_email {

     [email protected]

   }

   notification_email_from [email protected]

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id MYSQL-ha

}

 

vrrp_instance VI_1 {

    state BACKUP

    interface eth0

    virtual_router_id 51

    priority 150

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        172.19.0.201/24

    }

}

 

On 172.19.0.172:

 

! Configuration File for keepalived

 

global_defs {

   notification_email {

     [email protected]

   }

   notification_email_from [email protected]

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id MYSQL-ha

}

 

vrrp_instance VI_1 {

    state BACKUP

    interface eth0

    virtual_router_id 51

    priority 120

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        172.19.0.201/24

    }

}

 

 

Start keepalived on both hosts:

service keepalived start
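After both nodes are up, only the higher-priority node (172.19.0.171, priority 150) should hold the VIP. A small sketch for checking this from `ip addr` output (the VIP address is the one configured above; the helper name is hypothetical):

```shell
#!/bin/sh
# Sketch: report whether the given "ip addr" output shows this node
# holding the VIP 172.19.0.201 from the keepalived configuration.
has_vip() {
  echo "$1" | grep -q "inet 172\.19\.0\.201/"
}

# On a live node you would run:
#   has_vip "$(ip addr show eth0)" && echo "this node holds the VIP"
```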

Managing the VIP through the failover script:

[root@MHA4 ~]# cat  /usr/local/bin/master_ip_failover

#!/usr/bin/env perl

 

#  Copyright (C) 2011 DeNA Co.,Ltd.

#

#  This program is free software; you can redistribute it and/or modify

#  it under the terms of the GNU General Public License as published by

#  the Free Software Foundation; either version 2 of the License, or

#  (at your option) any later version.

#

#  This program is distributed in the hope that it will be useful,

#  but WITHOUT ANY WARRANTY; without even the implied warranty of

#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

#  GNU General Public License for more details.

#

#  You should have received a copy of the GNU General Public License

#   along with this program; if not, write to the Free Software

#  Foundation, Inc.,

#  51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

 

## Note: This is a sample script and is not complete. Modify the script based on your environment.

 

use strict;

use warnings FATAL => 'all';

 

use Getopt::Long;

 

my (

  $command,          $ssh_user,        $orig_master_host, $orig_master_ip,

  $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port

);

 

my $vip='172.19.0.201/24';

my $key="2";

my $ssh_start_vip ="/sbin/ifconfig eth0:$key $vip";

my $ssh_stop_vip="/sbin/ifconfig eth0:$key down";

 

GetOptions(

  'command=s'          => \$command,

  'ssh_user=s'         => \$ssh_user,

  'orig_master_host=s' => \$orig_master_host,

  'orig_master_ip=s'   => \$orig_master_ip,

  'orig_master_port=i' => \$orig_master_port,

  'new_master_host=s'  => \$new_master_host,

  'new_master_ip=s'    => \$new_master_ip,

  'new_master_port=i'  => \$new_master_port,

);

 

exit &main();

 

sub main {

  if ( $command eq "stop" || $command eq "stopssh" ) {

 

    # $orig_master_host, $orig_master_ip, $orig_master_port are passed.

    # If you manage master ip address at global catalog database,

    # invalidate orig_master_ip here.

    my $exit_code = 1;

    eval {

     

      print "Disabling the VIP on old master: $orig_master_host \n";

         &stop_vip();

      $exit_code = 0;

    };

    if ($@) {

      warn "Got Error: $@\n";

      exit $exit_code;

    }

    exit $exit_code;

  }

  elsif ( $command eq "start" ) {

 

    # all arguments are passed.

    # If you manage master ip address at global catalog database,

    # activate new_master_ip here.

    # You can also grant write access (create user, set read_only=0, etc) here.

    my $exit_code = 10;

    eval {

         print "Enabling the VIP - $vip on the new master - $new_master_host \n";

         &start_vip();

      $exit_code = 0;

    };

    if ($@) {

      warn $@;

 

      # If you want to continue failover, exit 10.

      exit $exit_code;

    }

    exit $exit_code;

  }

  elsif ( $command eq "status" ) {

    print "Checking the Status of the script.. ok \n";

    # do nothing

    exit 0;

  }

  else {

    &usage();

    exit 1;

  }

}

 

sub start_vip(){

         `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;

}

 

sub stop_vip(){

        `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;

}

sub usage {

  print

"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";

}

 

Manual switchover

[root@MHA4 ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Stopped app1 successfully.

[1]+  Exit 1                  nohup masterha_manager --conf=/etc/masterha/app1.cnf > /masterha/app1/manager.log < /dev/null 2>&1

[root@MHA4 ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.19.0.172 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000

 

 

 

Repairing the failed master

[root@MHA4 ~]# grep -i "All other slaves should start replication from " /masterha/app1/app1.log

Wed Sep  3 17:53:08 2014 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.19.0.172', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000008', MASTER_LOG_POS=252, MASTER_USER='repl', MASTER_PASSWORD='xxx';

 

This is important: once you have the statement above, you can run the CHANGE MASTER TO command directly on the repaired master to re-attach it as a slave.
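Extracting that statement can be scripted. A sketch (the helper name is hypothetical; the grep pattern and message text match the log line shown above):

```shell
#!/bin/sh
# Sketch: pull the exact CHANGE MASTER TO statement out of the manager
# log after a failover, ready to paste on the repaired old master.
extract_change_master() {
  grep -i "All other slaves should start replication from" "$1" \
    | sed 's/.*Statement should be: //'
}

# e.g. extract_change_master /masterha/app1/app1.log
```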

 

 

masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf

## The master has been switched back successfully.
