Amoeba with MMM: a highly available, load-balanced MySQL setup with read/write splitting and master-slave replication

Hostname    Physical IP      Cluster role            server_id
Monitor     192.168.1.134    MMM management node     -
Master1     192.168.1.130    Master (read/write)     1
Master2     192.168.1.131    Master (read/write)     2
Slave1      192.168.1.132    Slave (read-only)       3
Slave2      192.168.1.133    Slave (read-only)       4

 

Virtual IP       Role
192.168.1.140    Write VIP
192.168.1.141    Read VIP
192.168.1.142    Read VIP
192.168.1.143    Read VIP
192.168.1.144    Read VIP

Read queries go through the read VIPs, which can be load-balanced with software such as LVS or HAProxy.


1. Install mysql and mysql-server on all DB nodes.

2. Edit /etc/my.cnf:

[mysqld]

read-only=1

server-id=1

log-bin=mysql-bin

relay-log=mysql-relay-bin

replication-wild-ignore-table=test.%

replication-wild-ignore-table=information_schema.%

where server-id is 1, 2, 3 and 4 on the four hosts respectively, matching the table above.
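Since the template above is identical on every node apart from server-id, stamping the right value per host can be scripted. The helper below is an assumed convenience, not part of the original setup; the demo deliberately runs against a scratch file rather than the real /etc/my.cnf:

```shell
#!/bin/sh
# Assumed helper (not from the original text): set this node's server-id
# in a my.cnf-style file. Usage: set_server_id <id> <file>
set_server_id() {
  sed -i "s/^server-id=.*/server-id=$1/" "$2"
}

# Demo on a scratch copy instead of the real /etc/my.cnf:
cnf=$(mktemp)
printf '[mysqld]\nread-only=1\nserver-id=1\nlog-bin=mysql-bin\n' > "$cnf"
set_server_id 3 "$cnf"          # e.g. Slave1 would use 3
grep '^server-id=' "$cnf"       # prints: server-id=3
rm -f "$cnf"
```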

3. Create the replication user and grant privileges

       A) First, create the replication user in the mysql database on Master1:

                mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.131' identified by '123456';
                mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.132' identified by '123456';
                mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.133' identified by '123456';
                mysql> show master status;

 

B) Then, on Master2, set Master1 as its master server:

       mysql> change master to master_host='192.168.1.130',master_user='repl_user',
                master_password='123456',master_log_file='mysql-bin.xxxxx',
                master_log_pos=xxx;

        The values of master_log_file and master_log_pos come from the show master status; output of step A.

        mysql> start slave;
        mysql> show slave status;

        Slave_IO_Running and Slave_SQL_Running are the replication threads running on the slave node; both should report Yes.
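The two thread flags can be checked mechanically. The sketch below greps a captured `show slave status\G` output; sample output is inlined here so the snippet is self-contained, whereas on a real node you would pipe in `mysql -e 'show slave status\G'`:

```shell
#!/bin/sh
# Sample text stands in for: mysql -e 'show slave status\G'
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'

# Both replication threads must report Yes.
ok=$(printf '%s\n' "$status" | grep -Ec '^Slave_(IO|SQL)_Running: Yes$')
if [ "$ok" -eq 2 ]; then
  echo "replication OK"
else
  echo "replication broken"
fi
```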

 

C) Repeat step B on Slave1 and Slave2.


D) Create the replication user in the mysql database on Master2:

           mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.130' identified by '123456';
           mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.132' identified by '123456';
           mysql> grant replication slave on *.* to 'repl_user'@'192.168.1.133' identified by '123456';
           mysql> show master status;

 

E) Then, on Master1, set Master2 as its master server:

        mysql> change master to master_host='192.168.1.131',master_user='repl_user',
                        master_password='123456',master_log_file='mysql-bin.xxxxx',
                        master_log_pos=xxx;
        mysql> start slave;
        mysql> show slave status;

        As in step B, master_log_file and master_log_pos come from the show master status; output of step D.


4. Installing the MMM suite

A) On the Monitor node (packages come from the EPEL repository):

            yum -y install mysql-mmm*

B) On each MySQL DB node, installing mysql-mmm-agent is enough:

            yum -y install mysql-mmm-agent


5. MMM cluster configuration

A) Create the monitor user and agent user accounts on all MySQL nodes:

            mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by '123456';
            mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.%' identified by '123456';
            mysql> flush privileges;

B) Configure mmm_common.conf (in /etc/mysql-mmm/), then copy it to every MySQL node.

active_master_role      writer

<host default>

    cluster_interface       eth0

    pid_path                /var/run/mysql-mmm/mmm_agentd.pid

    bin_path                /usr/libexec/mysql-mmm/

    replication_user        repl_user   # replication account
    replication_password    123456      # replication password
    agent_user              mmm_agent   # account used to toggle read-only mode
    agent_password          123456

</host>

<host db1>   # settings for db1
    ip      192.168.1.130
    mode    master   # db1's role is master
    peer    db2      # db1's peer (the other master)

</host>

<host db2>

    ip      192.168.1.131

    mode    master

    peer    db1

</host>

<host db3>

    ip      192.168.1.132

    mode    slave      # db3's role is slave

</host>

<host db4>

    ip      192.168.1.133

    mode    slave

</host>

<role writer>    # writer role
    hosts   db1, db2
    ips     192.168.1.140     # writable VIP
    mode    exclusive         # only one of db1/db2 holds the role at a time
</role>

<role reader>    # reader role
    hosts   db1, db2, db3, db4
    ips     192.168.1.141, 192.168.1.142, 192.168.1.143, 192.168.1.144    # readable VIPs
    mode    balanced          # load-balanced across the hosts
</role>

 

C) Configure mmm_agent.conf:

        include mmm_common.conf
        this db1    (on each MySQL node, replace db1 with the matching name: db1, db2, db3 or db4)
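For instance, since only the `this` line differs between nodes, the complete agent file on db3 would be just:

```
include mmm_common.conf
this db3
```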


D) Configure mmm_mon.conf (on the MMM management node only):

include mmm_common.conf

<monitor>

    ip                  127.0.0.1

    pid_path            /var/run/mysql-mmm/mmm_mond.pid

    bin_path            /usr/libexec/mysql-mmm

    status_path         /var/lib/mysql-mmm/mmm_mond.status

    ping_ips            192.168.1.130,192.168.1.131,192.168.1.132,192.168.1.133
                        # IPs used to test network reachability; the network counts
                        # as up if any one of them answers ping. Do not list the
                        # local IP here.
    flap_duration 3600    # flapping detection window (seconds)
    flap_count 3          # max state changes tolerated within the window
    auto_set_online     0

 

    # The kill_host_bin does not exist by default, though the monitor will

    # throw a warning about it missing.  See the section 5.10 "Kill Host

    # Functionality" in the PDF documentation.

    #

    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host

    #

</monitor>

<host default>

    monitor_user        mmm_monitor      

    monitor_password    123456

</host>

debug 0


E) Configure /etc/default/mysql-mmm-agent on all MySQL nodes:

            ENABLED=1


6. Managing the MMM cluster

A) On the Monitor node, run:

             /etc/init.d/mysql-mmm-monitor start

B) On all MySQL nodes, run:

            /etc/init.d/mysql-mmm-agent start

C) Set each MySQL node online:

             mmm_control set_online db1    (and likewise db2, db3, db4)

D) Check the cluster's running state:

             mmm_control show
             mmm_control checks all
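To spot failed nodes at a glance, the `mmm_control show` output can be filtered for anything that is not ONLINE. The sample below is an assumed illustration in the usual mmm_control output style, inlined so the sketch is self-contained; on the Monitor you would capture the real output instead:

```shell
#!/bin/sh
# Assumed sample of `mmm_control show` output; on the Monitor you would
# capture it with: show_output=$(mmm_control show)
show_output='  db1(192.168.1.130) master/ONLINE. Roles: writer(192.168.1.140)
  db2(192.168.1.131) master/ONLINE. Roles: reader(192.168.1.141)
  db3(192.168.1.132) slave/HARD_OFFLINE. Roles:
  db4(192.168.1.133) slave/ONLINE. Roles: reader(192.168.1.144)'

# Print only hosts that are not ONLINE (empty output means all is well).
printf '%s\n' "$show_output" | grep -v '/ONLINE'
```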


7. Testing MMM's MySQL high-availability features

          Read/write splitting test:
           Use an ordinary MySQL user account when reading tables.

          Failover test:
                           Stop the MySQL service on the Master1 node, then check the MMM cluster state again.
                           Restart MySQL on Master1 and check the state once more; switch the writer role back to the Master manually with: mmm_control move_role writer db1
          Test the slave nodes in the same way.


8. MySQL read/write splitting

Implementation: MMM combined with Amoeba.

Amoeba acts as a distributed front-end proxy layer for MySQL. At the application layer it works as a SQL router when applications access MySQL, providing load balancing, high availability, SQL filtering and read/write splitting; through Amoeba you can achieve highly available, load-balanced and sharded data sources.

A) Install and configure the JDK (JavaSE 1.5 or later).

          Install the JDK under /usr/local/, then set the Java environment variables:

                 export JAVA_HOME=/usr/local/jdk1.6.0_45
                 export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
                 export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

B) Install Amoeba:

        mkdir /usr/local/amoeba
        tar xf amoeba-mysql-binary-2.2.0.tar.gz -C /usr/local/amoeba

        Start Amoeba:

                chmod +x -R /usr/local/amoeba/bin
                /usr/local/amoeba/bin/amoeba start

                One problem that can come up:

                       The stack size specified is too small, Specify at least 160k
                       Could not create the Java virtual machine.

                        To fix it, in /usr/local/amoeba/bin/amoeba change

                                   DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss128k"

                        to:

                                   DEFAULT_OPTS="-server -Xms256m -Xmx256m -Xss256k"

                  A normal startup looks like this:

                             log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
                             2016-03-27 11:04:39,568 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-2.2.0
                             log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
                             2016-03-27 11:04:39,949 INFO  net.ServerableConnectionManager - Amoeba for Mysql listening on 0.0.0.0/0.0.0.0:8066.
                             2016-03-27 11:04:39,954 INFO  net.ServerableConnectionManager - Amoeba Monitor Server listening on /127.0.0.1:63260.

                  Keep an eye on this log; it is the first place problems show up.

C) Configure Amoeba.

To implement read/write splitting, only dbServers.xml and amoeba.xml need to be edited.

First, configure dbServers.xml:


<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

<!--

Each dbServer needs to be configured into a Pool,

If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

 add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

 such as 'multiPool' dbServer   

-->

<dbServer name="abstractServer" abstractive="true">

<factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

<property name="manager">${defaultManager}</property>

<property name="sendBufferSize">64</property>

<property name="receiveBufferSize">128</property>

    

<!-- mysql port -->

<property name="port">3306</property>

<!-- mysql schema: the default database Amoeba connects to. Tables must be
     referenced with an explicit db.table name, otherwise operations run
     against repdb. -->
<property name="schema">repdb</property>

<!-- mysql user: the account Amoeba uses to connect to the back-end MySQL
     servers. This user must be created in the MySQL cluster and granted
     access from the Amoeba server. -->
<property name="user">ixdba</property>

<!--  mysql password -->
<property name="password">123456</property>

</factoryConfig>

 

<poolConfig class="com.meidusa.amoeba.net.poolable.PoolableObjectPool">

<property name="maxActive">500</property>    <!-- max connections -->
<property name="maxIdle">500</property>      <!-- max idle connections -->
<property name="minIdle">10</property>       <!-- min idle connections -->

<property name="minEvictableIdleTimeMillis">600000</property>

<property name="timeBetweenEvictionRunsMillis">600000</property>

<property name="testOnBorrow">true</property>

<property name="testOnReturn">true</property>

<property name="testWhileIdle">true</property>

</poolConfig>

</dbServer>

<!-- A writable back-end dbServer, named writedb here -->

<dbServer name="writedb"  parent="abstractServer">

<factoryConfig>

<!-- mysql ip -->

<!-- the writable VIP exposed by the MMM cluster -->

<property name="ipAddress">192.168.1.140</property>

</factoryConfig>

</dbServer>

<!-- Readable dbServers -->

<dbServer name="slave1"  parent="abstractServer">

<factoryConfig>

<!-- mysql ip -->

<!-- a readable VIP exposed by the MMM cluster -->

<property name="ipAddress">192.168.1.141</property>

</factoryConfig>

</dbServer>

<dbServer name="slave2"  parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.142</property>
</factoryConfig>
</dbServer>

<dbServer name="slave3"  parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.143</property>
</factoryConfig>
</dbServer>

<dbServer name="slave4"  parent="abstractServer">
<factoryConfig>
<!-- mysql ip -->
<property name="ipAddress">192.168.1.144</property>
</factoryConfig>
</dbServer>

 

<!-- A virtual dbServer group collecting all the readable back-ends -->

<dbServer name="myslaves" virtual="true">

<poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

<!-- Load balancing strategy: 1=ROUNDROBIN, 2=WEIGHTBASED, 3=HA -->

<property name="loadbalance">1</property>

<!-- Separated by commas,such as: server1,server2,server1 -->

<property name="poolNames">slave1,slave2,slave3,slave4</property>

</poolConfig>

</dbServer>

</amoeba:dbServers>

 

Then configure the other file, amoeba.xml:

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

 

<proxy>

<!-- service class must implements com.meidusa.amoeba.service.Service -->

<service name="Amoeba for Mysql" class="com.meidusa.amoeba.net.ServerableConnectionManager">

<!-- port -->

<!-- the port Amoeba listens on; 8066 by default -->

<property name="port">8066</property>

<!-- bind ipAddress -->

<!--

<property name="ipAddress">127.0.0.1</property>

 -->

<property name="manager">${clientConnectioneManager}</property>

<property name="connectionFactory">

<bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

<property name="sendBufferSize">128</property>

<property name="receiveBufferSize">64</property>

</bean>

</property>

<property name="authenticator">

<bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

<!-- Account and password clients use when connecting to Amoeba.
     In practice: mysql -uroot -p123456 -h192.168.1.134 (the Amoeba host's IP) -P8066 -->

<property name="user">root</property>

<property name="password">123456</property>

<property name="filter">

<bean class="com.meidusa.amoeba.server.IPAccessController">

<property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

</bean>

</property>

</bean>

</property>

</service>

<!-- server class must implements com.meidusa.amoeba.service.Service -->

<service name="Amoeba Monitor Server" class="com.meidusa.amoeba.monitor.MonitorServer">

<!-- port -->

<!--  default value: random number

<property name="port">9066</property>

-->

<!-- bind ipAddress -->

<property name="ipAddress">127.0.0.1</property>

<property name="daemon">true</property>

<property name="manager">${clientConnectioneManager}</property>

<property name="connectionFactory">

<bean class="com.meidusa.amoeba.monitor.net.MonitorClientConnectionFactory"></bean>

</property>

</service>

<runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

<!-- proxy server net IO Read thread size -->

<property name="readThreadPoolSize">20</property>

<!-- proxy server client process thread size -->

<property name="clientSideThreadPoolSize">30</property>

<!-- mysql server data packet process thread size -->

<property name="serverSideThreadPoolSize">30</property>

<!-- per connection cache prepared statement size  -->

<property name="statementCacheSize">500</property>

<!-- query timeout( default: 60 second , TimeUnit:second) -->

<property name="queryTimeout">60</property>

</runtime>

</proxy>

<!--

Each ConnectionManager will start as thread

manager responsible for the Connection IO read , Death Detection

-->

<connectionManagerList>

<connectionManager name="clientConnectioneManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">

<property name="subManagerClassName">com.meidusa.amoeba.net.ConnectionManager</property>

<!--

  default value is avaliable Processors

<property name="processors">5</property>

 -->

</connectionManager>

<connectionManager name="defaultManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">

<property name="subManagerClassName">com.meidusa.amoeba.net.AuthingableConnectionManager</property>

<!--

  default value is avaliable Processors

<property name="processors">5</property>

 -->

</connectionManager>

</connectionManagerList>

<!-- default using file loader -->

<dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

<property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

</dbServerLoader>

<queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

<property name="ruleLoader">

<bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

<property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

<property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

</bean>

</property>

<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

<property name="LRUMapSize">1500</property>

<!-- Amoeba's default pool -->
<property name="defaultPool">writedb</property>

<!-- the write and read pools defined above -->
<property name="writePool">writedb</property>
<property name="readPool">myslaves</property>

<property name="needParse">true</property>

</queryRouter>

</amoeba:configuration>

 

D) Grant Amoeba access to the databases.

             Run this on every MySQL node in the MMM cluster to authorize the Amoeba server to reach all MySQL database nodes:

                     mysql> GRANT ALL ON repdb.* TO 'ixdba'@'192.168.1.134' identified by '123456';
                     mysql> FLUSH PRIVILEGES;


E) Test Amoeba's read/write splitting and load balancing.

            Enable the MySQL query log on all MySQL nodes in the MMM cluster so the results can be verified.

            Add the following to /etc/my.cnf:

            log=/var/log/mysql_query_log    (create this file yourself and make it writable by mysql)
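Note that the bare `log=` directive is the older syntax; on MySQL 5.1 and later the same query log is usually enabled with the general log options instead (shown here as an alternative, if your server version supports them):

```ini
[mysqld]
general_log=1
general_log_file=/var/log/mysql_query_log
```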

 

            On each MySQL node, create a table named mmm_test in the test database:

                      mysql> use test;
                      mysql> create table mmm_test (id int,email varchar(60));
                      mysql> insert into mmm_test (id,email) values (100,'this is <the node's real IP>');

 

On a remote MySQL client, connect to the database through Amoeba, using the username, password and port specified in the Amoeba configuration file (amoeba.xml) together with the Amoeba server's IP address:

                          mysql -uroot -p123456 -h192.168.1.134 -P8066

                           mysql> select * from test.mmm_test;
                           +------+-----------------------+
                           | id   | email                 |
                           +------+-----------------------+
                           |  100 | this is 192.168.1.130 |
                           +------+-----------------------+
                           mysql> select * from test.mmm_test;
                           +------+-----------------------+
                           | id   | email                 |
                           +------+-----------------------+
                           |  100 | this is 192.168.1.132 |
                           +------+-----------------------+

                          Successive identical queries land on different nodes, which shows load balancing is working.

                          If instead you see:

                           ERROR 2006 (HY000): MySQL server has gone away
                           No connection. Trying to reconnect...
                           Connection id:    11416420

                          check whether the account and password in dbServers.xml are configured correctly:

                         <!-- mysql user -->
                         <property name="user">root</property>
                         <property name="password">password</property>


Then test read/write splitting. Create two tables, mmm_test1 and mmm_test2:

        mysql> create table mmm_test1 (id int,email varchar(60));
        mysql> create table mmm_test2 (id int,email varchar(60));
        mysql> insert into mmm_test1 (id,email) values (103,'[email protected]');
        mysql> drop table mmm_test2;

Check the MySQL query logs: the write statements appear only on the master, because every node is started read-only and MMM grants write access only to the node currently holding the writer role, while the selects are spread across the read nodes.
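That check can be mechanized. The sketch below uses sample log excerpts standing in for the per-node query logs (assumed content, for illustration only); writes should match only in the master's log:

```shell
#!/bin/sh
# Assumed excerpts of the query logs on the writer and on one slave;
# in practice you would read /var/log/mysql_query_log on each node.
master_log='create table mmm_test1 (id int,email varchar(60))
insert into mmm_test1 (id,email) values (103,'\''[email protected]'\'')
drop table mmm_test2'
slave_log='select * from test.mmm_test
select * from test.mmm_test'

writes='insert |drop |create |update |delete '
printf '%s\n' "$slave_log" | grep -Eqi "$writes" \
  && echo "writes leaked to a slave" \
  || echo "slaves received only reads"
printf '%s\n' "$master_log" | grep -Eqi "$writes" \
  && echo "writes went to the master"
```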


The overall approach: first build MySQL master-slave replication; then use MMM to monitor and manage the Master-Master replication and service state, as well as the replication and running state of the Slave nodes, so that failover happens automatically when any node fails (MMM automatically fences the failed node); finally, use Amoeba to implement MySQL read/write splitting.


I am a beginner writing this up while studying from books, so there is plenty of room for improvement; corrections and advice are very welcome.
