MySQL + DRBD + Corosync + Pacemaker (CentOS 6.5 edition)


I. DRBD Configuration


   1. Environment overview

              node1       192.168.110.141

              node2       192.168.110.142


       Node1:

              # sed -i 's@\(HOSTNAME=\).*@\1node1.pancou.com@g' /etc/sysconfig/network

              # hostname node1.pancou.com

              # vim /etc/hosts

              192.168.110.141 node1.pancou.com node1

              192.168.110.142 node2.pancou.com node2



       Node2:

              # sed -i 's@\(HOSTNAME=\).*@\1node2.pancou.com@g' /etc/sysconfig/network

              # hostname node2.pancou.com

              # vim /etc/hosts

              192.168.110.141 node1.pancou.com node1

              192.168.110.142 node2.pancou.com node2

    

    2. Set up key-based SSH communication between the two nodes, which can be done with commands like the following:


        Node1:

             [root@node1 ~]# yum install openssh-clients

             [root@node1 ~]# ssh-keygen -t rsa

             [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2

        Node2:

             [root@node2 ~]# yum install openssh-clients

             [root@node2 ~]# ssh-keygen -t rsa

             [root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
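The ssh-keygen call above is interactive. For scripted setups a non-interactive variant can be used; the sketch below generates a key pair into a temporary directory purely for illustration (on a real node you would target /root/.ssh/id_rsa as in the steps above):

```shell
# Sketch: non-interactive key generation. -N '' sets an empty passphrase,
# -q suppresses prompts. The temp directory is illustrative only.
keydir="$(mktemp -d)"
ssh-keygen -t rsa -N '' -q -f "$keydir/id_rsa"
ls "$keydir"
```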


    3. Install the packages

              

          DRBD consists of two parts: a kernel module and userspace management tools. The kernel module has been merged into the mainline Linux kernel since 2.6.33, so if your kernel is at least that recent you only need to install the management tools; otherwise you must install both the kernel module and the tools, and the two packages' versions must correspond.


           For CentOS 5 the available DRBD versions are 8.0, 8.2, and 8.3, packaged as drbd, drbd82, and drbd83, with matching kernel-module packages kmod-drbd, kmod-drbd82, and kmod-drbd83. For CentOS 6 the version is 8.4, packaged as drbd and drbd-kmdl. When choosing packages keep two things in mind: the drbd and drbd-kmdl versions must correspond, and the drbd-kmdl version must match the running kernel version. The versions differ slightly in features and configuration. Our platform is x86_64 running CentOS 6.5, so we need both the kernel module and the management tools, and we use the latest 8.4 release (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm), available from ftp://rpmfind.net/linux/atrpms/.
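Since drbd-kmdl must match the running kernel exactly, a quick check before installing can save a failed attempt. A sketch (the package file name below is the one from this walkthrough; substitute whatever you downloaded):

```shell
# drbd-kmdl embeds the kernel release it was built for, e.g.
# drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm.
# Compare that against the running kernel before installing.
kernel="$(uname -r)"
echo "running kernel: ${kernel}"
pkg="drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm"
case "$pkg" in
    drbd-kmdl-"$kernel"-*) echo "kmdl matches running kernel" ;;
    *)                     echo "kmdl does NOT match running kernel" ;;
esac
```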


           In practice, download the package versions that match your own platform; individual download links for every version are not listed here.

           [root@node1 ~]# rpm -ivh  drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm 

              warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY

              Preparing...                ########################################### [100%]

                 1:drbd-kmdl-2.6.32-431.el########################################### [ 50%]

                 2:drbd                   ########################################### [100%]

            [root@node1 ~]# scp drbd-* node2:/root/

drbd-8.4.3-33.el6.x86_64.rpm                                                           100%  283KB 283.3KB/s   00:00    

drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm                                       100%  145KB 145.2KB/s   00:00 

               

             [root@node2 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm 

             warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY

             Preparing...                ########################################### [100%]

                1:drbd-kmdl-2.6.32-431.el########################################### [ 50%]

                2:drbd                   ########################################### [100%]

    4. Configure DRBD


          DRBD's main configuration file is /etc/drbd.conf. For easier management it is usually split into several pieces kept under /etc/drbd.d/, with the main file merely pulling them in via "include" directives. The /etc/drbd.d/ directory normally holds global_common.conf plus any number of files ending in .res: global_common.conf defines the global and common sections, while each .res file defines one resource.


          The global section may appear only once, and if all configuration is kept in a single file rather than split across several, it must come at the very beginning. The only parameters currently allowed in the global section are minor-count, dialog-refresh, disable-ip-verification, and usage-count.


          The common section defines parameters inherited by every resource by default; any parameter usable in a resource definition may appear there. It is not mandatory, but placing parameters shared by several resources in the common section keeps the configuration simpler.


          A resource section defines one DRBD resource, usually in its own .res file under /etc/drbd.d/. Each resource must be named, using non-whitespace ASCII characters, and must contain at least two host ("on") subsections naming the nodes the resource is tied to; all other parameters can be inherited from the common section or from DRBD's defaults.

                        

           The following operations are done on node1.pancou.com.

           [root@node1 ~]# cat /etc/drbd.conf 

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example


include "drbd.d/global_common.conf";

include "drbd.d/*.res";

               [root@node1 ~]# ls /etc/drbd.d/

global_common.conf  


        1. Configure /etc/drbd.d/global_common.conf


            [root@node1 ~]# cat /etc/drbd.d/global_common.conf 

global {

usage-count no;

# minor-count dialog-refresh disable-ip-verification

}


common {

handlers {

# These are EXAMPLE handlers only.

# They may have severe implications,

# like hard resetting the node under certain circumstances.

# Be careful when chosing your poison.


pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";

# split-brain "/usr/lib/drbd/notify-split-brain.sh root";

# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";

# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;

}


startup {

# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb

}


options {

# cpu-mask on-no-data-accessible

}


disk {

# size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes

# disk-drain md-flushes resync-rate resync-after al-extents

        # c-plan-ahead c-delay-target c-fill-target c-max-rate

        # c-min-rate disk-timeout

on-io-error detach;

}


net {

# protocol timeout max-epoch-size max-buffers unplug-watermark

# connect-int ping-int sndbuf-size rcvbuf-size ko-count

# allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri

# after-sb-1pri after-sb-2pri always-asbp rr-conflict

# ping-timeout data-integrity-alg tcp-cork on-congestion

# congestion-fill congestion-extents csums-alg verify-alg

# use-rle

  protocol C;

cram-hmac-alg "sha1";

shared-secret "mydrbdlab";

}

syncer {

rate 1000M;

}

}

           2. Define a resource, /etc/drbd.d/mystorage.res, with the following content:


               [root@node1 ~]# cat /etc/drbd.d/mystorage.res 

resource mystorage {

on node1.pancou.com {

device  /dev/drbd0;

disk  /dev/sdb;

address   192.168.110.141:7789;

meta-disk internal;

}

on node2.pancou.com {

device  /dev/drbd0;

disk  /dev/sdb;

address   192.168.110.142:7789;

meta-disk internal;

}

}


             These files must be identical on both nodes, so the configuration just written can be synchronized to the other node over ssh.


             [root@node1 ~]# ls /etc/drbd.d/

global_common.conf  mystorage.res

[root@node1 ~]# scp /etc/drbd.d/* node2:/etc/drbd.d/

global_common.conf                                                                     100% 1949     1.9KB/s   00:00    

mystorage.res 


       3. Initialize the defined resource on both nodes and start the service:


        1) Initialize the resource. Run on Node1 and Node2 respectively:

          [root@node1 ~]# iptables -F

          [root@node1 ~]# ssh node2 "iptables -F"

          [root@node1 ~]# drbdadm create-md mystorage

Writing meta data...

initializing activity log

NOT initializing bitmap

lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

New drbd meta data block successfully created.

lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

           [root@node1 ~]# ssh node2 "drbdadm create-md mystorage"

NOT initializing bitmap

Writing meta data...

initializing activity log

New drbd meta data block successfully created.

lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

         

        2) Start the service. Run on Node1 and Node2 respectively:

           [root@node1 ~]# /etc/init.d/drbd start

Starting DRBD resources: [

     create res: mystorage

   prepare disk: mystorage

    adjust disk: mystorage

     adjust net: mystorage

]

..........

***************************************************************

 DRBD's startup script waits for the peer node(s) to appear.

..........

            [root@node2 ~]# /etc/init.d/drbd start

Starting DRBD resources: [

     create res: mystorage

   prepare disk: mystorage

    adjust disk: mystorage

     adjust net: mystorage

]

.

         3) Check the startup status:

            [root@node1 ~]# cat /proc/drbd 

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00

 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10485404


             ro: the roles. On first startup both DRBD nodes default to the Secondary state.
             ds: the disk states. "Inconsistent/Inconsistent" means the data on the two nodes is not yet in sync.
             ns: network send statistics.
             dw: disk write statistics.
             dr: disk read statistics.
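The ro:/ds: pairs can also be pulled out programmatically. A small sketch, run here against a sample status line (on a live node you would read /proc/drbd instead):

```shell
# Extract the role (ro:) and disk-state (ds:) pairs from a DRBD status line.
line=' 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'
roles=$(echo "$line" | awk '{for(i=1;i<=NF;i++) if($i ~ /^ro:/) print substr($i,4)}')
disks=$(echo "$line" | awk '{for(i=1;i<=NF;i++) if($i ~ /^ds:/) print substr($i,4)}')
echo "roles: $roles"    # local/peer roles
echo "disks: $disks"    # local/peer disk states
```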


          The status can also be checked with the drbd-overview command:

            [root@node1 ~]# drbd-overview 

  0:mystorage/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r----- 


              The output above shows both nodes in the Secondary state, so next we need to promote one of them to Primary.


         4) Set the primary node

              On the node to be promoted to Primary, run:

            [root@node1 ~]# drbdadm --force primary mystorage             # promote this node to Primary (first time only)


            Note: alternatively, the primary can be set by running the following on the node to be promoted:

               # drbdadm -- --overwrite-data-of-peer primary mystorage


             Checking the status again shows that data synchronization has begun:

            [root@node1 ~]# drbd-overview

  0:mystorage/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n- 

[>...................] sync'ed:  9.7% (9248/10236)M   

             

             The progress can also be watched live:


             [root@node1 ~]# watch -n 10 "cat /proc/drbd"       

Every 10.0s: cat /proc/drbd                                                                      Fri Aug 12 00:28:07 2016


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00

 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-

    ns:5382940 nr:0 dw:0 dr:5389976 al:0 bm:328 lo:0 pe:2 ua:7 ap:0 ep:1 wo:f oos:5104284

        [=========>..........] sync'ed: 51.4% (4984/10236)M

        finish: 0:01:11 speed: 71,108 (61,848) K/sec


Every 10.0s: cat /proc/drbd                                                                      Fri Aug 12 00:28:47 2016


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00

 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-

    ns:8115556 nr:0 dw:0 dr:8123032 al:0 bm:495 lo:0 pe:2 ua:7 ap:0 ep:1 wo:f oos:2371228

        [==============>.....] sync'ed: 77.5% (2312/10236)M

        finish: 0:00:37 speed: 63,216 (63,888) K/sec


Every 10.0s: cat /proc/drbd                                                                      Fri Aug 12 00:29:17 2016


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00

 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-

    ns:10091464 nr:0 dw:0 dr:10099352 al:0 bm:615 lo:0 pe:2 ua:8 ap:0 ep:1 wo:f oos:395932

        [==================>.] sync'ed: 96.3% (384/10236)M

        finish: 0:00:05 speed: 67,204 (63,856) K/sec  


            After synchronization completes, check again: the data is now up to date on both sides and the nodes have distinct Primary/Secondary roles:

              [root@node1 ~]# drbd-overview 

  0:mystorage/0  Connected Primary/Secondary UpToDate/UpToDate C r-----


              [root@node1 ~]# watch -n 10 "cat /proc/drbd" 

Every 10.0s: cat /proc/drbd                                                                      Fri Aug 12 00:31:37 2016


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:10485404 nr:0 dw:0 dr:10486068 al:0 bm:640 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0


    5. Create the filesystem


          A filesystem can be mounted only on the Primary node, so the DRBD device can be formatted only after a primary has been designated:

          # mke2fs -j -L DRBD /dev/drbd0

          # mkdir /mnt/drbd 

          # mount /dev/drbd0 /mnt/drbd 

          

          [root@node1 ~]# mkfs -t ext4 /dev/drbd0

          [root@node1 ~]# mkdir /data

          [root@node1 ~]# mount /dev/drbd0 /data/

          [root@node1 ~]# ls /data/

          lost+found


    6. Test switching the Primary and Secondary roles

          In DRBD's primary/secondary model only one node can be Primary at any given moment. To swap the roles of the two nodes, the current Primary must first be demoted to Secondary before the former Secondary can be promoted to Primary:

          

          [root@node1 ~]# cp /etc/inittab /data/

          [root@node1 ~]# ls /data/

          inittab  lost+found

          [root@node1 ~]# vim /data/inittab

          # 

          id:3:initdefault:

          hello!!!


       1) Demote node1 from Primary to Secondary

          [root@node1 ~]# umount /data

          [root@node1 ~]# drbd-overview 

            0:mystorage/0  Connected Primary/Secondary UpToDate/UpToDate C r----- 

          [root@node1 ~]# drbdadm secondary mystorage

          [root@node1 ~]# drbd-overview 

            0:mystorage/0  Connected Secondary/Secondary UpToDate/UpToDate C r----- 


        2) Promote node2 to Primary

           [root@node2 ~]# drbd-overview 

              0:mystorage/0  Connected Secondary/Secondary UpToDate/UpToDate C r----- 

           [root@node2 ~]# drbdadm primary mystorage

           [root@node2 ~]# drbd-overview 

              0:mystorage/0  Connected Primary/Secondary UpToDate/UpToDate C r----- 

           [root@node2 ~]# mkdir /data

           [root@node2 ~]# mount /dev/drbd0 /data/

           [root@node2 ~]# ls /data/

           inittab  lost+found

           [root@node2 ~]# cat /data/inittab 

           # 

           id:3:initdefault:

           hello!!!


II. Corosync + Pacemaker Configuration


       For the high-availability configuration to go smoothly, neither node may be Primary, and the drbd service must be stopped and must not start automatically at boot; so unmount, demote, stop, and disable it:

       [root@node1 ~]# umount /data/

       [root@node1 ~]# drbdadm secondary mystorage

       [root@node1 ~]# drbd-overview 

         0:mystorage/0  Connected Secondary/Secondary UpToDate/UpToDate C r----- 

       [root@node1 ~]# service drbd stop

       Stopping all DRBD resources: .

       [root@node1 ~]# chkconfig --add drbd 

       [root@node1 ~]# chkconfig drbd off

       [root@node1 ~]# chkconfig --list drbd 

       drbd           0:off 1:off 2:off 3:off 4:off 5:off 6:off

       [root@node2 ~]# service drbd stop

       Stopping all DRBD resources: .

       [root@node2 ~]# chkconfig --add drbd

       [root@node2 ~]# chkconfig drbd off

       [root@node2 ~]# chkconfig --list drbd 

       drbd           0:off 1:off 2:off 3:off 4:off 5:off 6:off


    1. Install corosync and pacemaker, directly via yum, on both nodes

       (1) Corosync

             Corosync is part of a cluster-management suite; the transport and protocols it uses for message passing are defined in a simple configuration file. Although released in 2008, it is not really new software: back in 2002 the OpenAIS project, having grown too large, was split into two subprojects, and the part providing HA heartbeat message transport became Corosync; roughly 60% of its code comes from OpenAIS. Corosync on its own provides a complete set of basic HA functions, but for richer, more complex functionality OpenAIS is still needed. Corosync is the direction things are heading, and new projects generally adopt it. For graphical management there are hb_gui, the RHCS luci+ricci suite, and the Java-based LCMC cluster management tool.


             Corosync provides only the message layer (i.e. the HeartBeat + CCM equivalents) and has no CRM of its own; Pacemaker is generally used for resource management.


        (2) Pacemaker

             Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities of a cluster infrastructure layer (OpenAIS, heartbeat, or corosync) to detect and recover from node-level and resource-level failures, maximizing the availability of cluster services (also called resources).


             Pacemaker grew out of the CRM component of the Linux-HA project's Heartbeat. Starting with Heartbeat 3, Heartbeat was split by module into several subprojects, and the CRM component became the separate Pacemaker project. As a standalone project, Pacemaker aims to be a scalable high-availability cluster resource manager, and supports both Corosync and Heartbeat.

             

             Since Pacemaker 1.1.8, the CRM shell has been a separate project, crmsh; Pacemaker no longer ships a command-line interface by default, so crmsh must be installed separately.
             As of 6.4, RHEL no longer provides the crmsh command-line cluster configuration tool and uses pcs instead; if you are used to the crm command, you can download and install the packages yourself. crmsh depends on pssh, so download that as well.


             crmsh and pssh can be downloaded from:

                http://crmsh.github.io/download/


    2. Install and configure corosync and pacemaker (the following commands are run on node1.pancou.com and node2.pancou.com)

       [root@node1 ~]# yum install -y corosync pacemaker

       

       [root@node2 ~]#  yum install -y corosync pacemaker

       

    3. Configure corosync

       [root@node1 ~]# ls /etc/corosync/

       corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d

       [root@node1 ~]# cd /etc/corosync/

       [root@node1 corosync]# cp corosync.conf.example /etc/corosync/corosync.conf

       [root@node1 corosync]# ls

       corosync.conf  corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d

       [root@node1 corosync]# vim corosync.conf

[root@node1 corosync]# cat corosync.conf

# Please read the corosync.conf.5 manual page

compatibility: whitetank


totem {

version: 2


# secauth: Enable mutual node authentication. If you choose to

# enable this ("on"), then do remember to create a shared

# secret with "corosync-keygen".

secauth: on


threads: 0


# interface: define at least one interface to communicate

# over. If you define more than one interface stanza, you must

# also set rrp_mode.

interface {

                # Rings must be consecutively numbered, starting at 0.

ringnumber: 0

# This is normally the *network* address of the

# interface to bind to. This ensures that you can use

# identical instances of this configuration file

# across all your cluster nodes, without having to

# modify this option.

bindnetaddr: 192.168.110.0

# However, if you have multiple physical network

# interfaces configured for the same subnet, then the

# network address alone is not sufficient to identify

# the interface Corosync should bind to. In that case,

# configure the *host* address of the interface

# instead:

# bindnetaddr: 192.168.1.1

# When selecting a multicast address, consider RFC

# 2365 (which, among other things, specifies that

# 239.255.x.x addresses are left to the discretion of

# the network administrator). Do not reuse multicast

# addresses across multiple Corosync clusters sharing

# the same network.

mcastaddr: 239.255.110.1

# Corosync uses the port you specify here for UDP

# messaging, and also the immediately preceding

# port. Thus if you set this to 5405, Corosync sends

# messages over UDP ports 5405 and 5404.

mcastport: 5405

# Time-to-live for cluster communication packets. The

# number of hops (routers) that this ring will allow

# itself to pass. Note that multicast routing must be

# specifically enabled on most network routers.

ttl: 1

}

}


logging {

# Log the source file and line where messages are being

# generated. When in doubt, leave off. Potentially useful for

# debugging.

fileline: off

# Log to standard error. When in doubt, set to no. Useful when

# running in the foreground (when invoking "corosync -f")

to_stderr: no

# Log to a log file. When set to "no", the "logfile" option

# must not be set.

to_logfile: yes

logfile: /var/log/cluster/corosync.log

# Log to the system log daemon. When in doubt, set to yes.

to_syslog: no

# Log debug messages (very verbose). When in doubt, leave off.

debug: off

# Log messages with time stamps. When in doubt, set to on

# (unless you are only logging to syslog, where double

# timestamps can be annoying).

timestamp: on

logger_subsys {

subsys: AMF

debug: off

}

}

service {

  ver:  0

  name: pacemaker                               

  # use_mgmtd: yes         

}

aisexec {

user: root

group: root

}


        Generate the authentication key used for inter-node communication.
        corosync-keygen draws randomness from /dev/random, so on a freshly installed system with little activity
        there may not be enough entropy; if the keygen stalls, just hammer on the keyboard at random until enough entropy has been gathered.
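If typing at the keyboard is impractical (e.g. on a remote console), the rng-tools package provides rngd, which can feed /dev/urandom into the kernel entropy pool so the keygen completes unattended. A sketch under that assumption (rng-tools is not part of the original steps):

```shell
# Check for rngd, which can feed /dev/urandom into the entropy pool so
# corosync-keygen does not block. rngd must run as root; it is provided
# by the rng-tools package (an assumption for CentOS 6).
if command -v rngd >/dev/null 2>&1; then
    echo "rngd available: run 'rngd -r /dev/urandom' before corosync-keygen"
else
    echo "rngd not installed: yum install rng-tools"
fi
```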

        [root@node1 corosync]# corosync-keygen 

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 976).

(random keystrokes typed here to feed the entropy pool)

.....................

         [root@node1 corosync]# ls

authkey  corosync.conf  corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d

[root@node1 corosync]# scp authkey corosync.conf node2:/etc/corosync/

authkey                                                                                100%  128     0.1KB/s   00:00    

corosync.conf                                                                          100% 2766     2.7KB/s   00:00 

         

        4. Start corosync (the following commands are run on node1):


           [root@node1 ~]# service corosync start

           Starting Corosync Cluster Engine (corosync):               [  OK  ]

           

           [root@node1 ~]# ssh node2 "service corosync start"

           Starting Corosync Cluster Engine (corosync): [  OK  ]

           

        5. Verify startup


             Check that the corosync engine started correctly:

             [root@node1 ~]#  grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log

Aug 12 02:04:11 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.

Aug 12 02:04:11 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Aug 12 02:06:32 corosync [MAIN  ] Corosync Cluster Engine exiting with status 0 at main.c:2055.


             Check that the initial membership notifications were sent out correctly:

             [root@node1 ~]# grep  TOTEM  /var/log/cluster/corosync.log

Aug 12 02:04:11 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).

Aug 12 02:04:11 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Aug 12 02:04:11 corosync [TOTEM ] The network interface [192.168.110.141] is now up.

Aug 12 02:04:11 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Aug 12 02:04:11 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.


              

              Check whether any errors occurred during startup. The errors below indicate that Pacemaker will soon no longer run as a corosync plugin and that CMAN is recommended as the cluster infrastructure instead; they can be safely ignored here.

              [root@node1 ~]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources

Aug 12 03:25:34 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.

Aug 12 03:25:34 corosync [pcmk  ] ERROR: process_ais_conf:  Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN

Aug 12 03:26:42 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process crmd exited (pid=3370, rc=201)

              

             Check whether pacemaker started correctly:

             [root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log

Aug 12 03:25:34 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized

Aug 12 03:25:34 corosync [pcmk  ] Logging: Initialized pcmk_startup

Aug 12 03:25:34 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615

Aug 12 03:25:34 corosync [pcmk  ] info: pcmk_startup: Service: 9

Aug 12 03:25:34 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1.pancou.com

              

              Run ps auxf to view the processes started by corosync.

             [root@node1 ~]# ps auxf |grep corosync

root      3401  0.0  0.1 103252   856 pts/2    S+   03:36   0:00      |   \_ grep corosync

root      3360  1.0  0.9 612116  4912 ?        Ssl  03:25   0:06 corosync

             [root@node1 ~]# ps auxf |grep pacemaker

root      3399  0.0  0.1 103252   864 pts/2    S+   03:33   0:00      |   \_ grep pacemaker

189       3365  0.1  2.1  94072 10840 ?        S<   03:25   0:00  \_ /usr/libexec/pacemaker/cib

root      3366  0.0  0.9  95136  4624 ?        S<   03:25   0:00  \_ /usr/libexec/pacemaker/stonithd

root      3367  0.0  0.6  62968  3352 ?        S<   03:25   0:00  \_ /usr/libexec/pacemaker/lrmd

189       3368  0.0  0.7  86012  3736 ?        S<   03:25   0:00  \_ /usr/libexec/pacemaker/attrd

189       3369  0.0  0.7  81552  3628 ?        S<   03:25   0:00  \_ /usr/libexec/pacemaker/pengine

189       3385  0.0  0.7  94956  3840 ?        S<   03:26   0:00  \_ /usr/libexec/pacemaker/crmd

              

              [root@node1 ~]# netstat -anptul

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      2025/sshd           

tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1122/master         

tcp        0      0 192.168.110.141:22          192.168.110.1:53648         ESTABLISHED 1963/sshd           

tcp        0     52 192.168.110.141:22          192.168.110.1:61309         ESTABLISHED 2807/sshd           

tcp        0      0 :::22                       :::*                        LISTEN      2025/sshd           

tcp        0      0 ::1:25                      :::*                        LISTEN      1122/master         

udp        0      0 192.168.110.141:5404        0.0.0.0:*                               3360/corosync       

udp        0      0 192.168.110.141:5405        0.0.0.0:*                               3360/corosync       

udp        0      0 239.255.110.1:5405          0.0.0.0:*                               3360/corosync     

              Checking node2:

             If all of the commands above ran without problems, corosync on node2 can be started with:

             # ssh node2 -- /etc/init.d/corosync start


             Note: node2 should be started from node1 with the command above; do not start it directly on node2. The relevant log from node2 follows.

             [root@node2 corosync]# tail /var/log/cluster/corosync.log 

Aug 12 03:27:15 [2529] node2.pancou.com    pengine:     info: determine_online_status_fencing: Node node2.pancou.com is active

Aug 12 03:27:15 [2529] node2.pancou.com    pengine:     info: determine_online_status: Node node2.pancou.com is online

Aug 12 03:27:15 [2529] node2.pancou.com    pengine:   notice: stage6: Delaying fencing operations until there are resources to manage

Aug 12 03:27:15 [2530] node2.pancou.com       crmd:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]

Aug 12 03:27:15 [2530] node2.pancou.com       crmd:     info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1470943635-19) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2

Aug 12 03:27:15 [2530] node2.pancou.com       crmd:   notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete

Aug 12 03:27:15 [2530] node2.pancou.com       crmd:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE

Aug 12 03:27:15 [2530] node2.pancou.com       crmd:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]

Aug 12 03:27:15 [2529] node2.pancou.com    pengine:   notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2

Aug 12 03:27:15 [2529] node2.pancou.com    pengine:   notice: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.

             [root@node1 ~]# crm

crm(live)# status

Last updated: Fri Aug 12 03:37:27 2016 Last change: Fri Aug 12 03:26:10 2016 by hacluster via crmd on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 0 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]

crm(live)# configure

ERROR: CIB not supported: validator 'pacemaker-2.4', release '3.0.10'

ERROR: You may try the upgrade command

crm(live)configure# cd

crm(live)# quit

bye

             [root@node1 ~]# service corosync stop

Signaling Corosync Cluster Engine (corosync) to terminate: [  OK  ]

Waiting for corosync services to unload:.                  [  OK  ]

[root@node1 ~]# ssh node2 "service corosync stop"

Signaling Corosync Cluster Engine (corosync) to terminate: [  OK  ]

Waiting for corosync services to unload:.[  OK  ]

              

              http://crmsh.github.io/download/

[network_ha-clustering_Stable]

name=Stable High Availability/Clustering packages (CentOS_CentOS-6)

type=rpm-md

baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/

gpgcheck=1

gpgkey=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6//repodata/repomd.xml.key

enabled=1

              [root@node1 ~]# vim /etc/yum.repos.d/CentOS-Base.repo 

                Paste the stanza above at the bottom of CentOS-Base.repo.
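Rather than appending to CentOS-Base.repo, the stanza can also live in its own file under /etc/yum.repos.d/ (the filename below is our choice, not mandated). A sketch that writes it to a configurable directory, defaulting to a temp dir for illustration:

```shell
# Write the repo stanza to its own file. REPO_DIR defaults to a temp dir
# for illustration; on a real node set REPO_DIR=/etc/yum.repos.d.
REPO_DIR="${REPO_DIR:-$(mktemp -d)}"
cat > "$REPO_DIR/ha-clustering.repo" <<'EOF'
[network_ha-clustering_Stable]
name=Stable High Availability/Clustering packages (CentOS_CentOS-6)
type=rpm-md
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
gpgcheck=1
gpgkey=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6//repodata/repomd.xml.key
enabled=1
EOF
grep -c '^' "$REPO_DIR/ha-clustering.repo"
```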


              [root@node1 ~]# yum install crmsh pssh              # the same on node2

              [root@node1 ~]# crm status

Last updated: Fri Aug 12 03:55:12 2016 Last change: Fri Aug 12 03:26:24 2016 by hacluster via crmd on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 0 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:

               [root@node1 ~]# crm

crm(live)# configure show

node node1.pancou.com

node node2.pancou.com

property cib-bootstrap-options: \

dc-version=1.1.14-8.el6-70404b0 \

cluster-infrastructure="classic openais (with plugin)" \

expected-quorum-votes=2

               [root@node1 ~]# crm ra classes

lsb

ocf / .isolation heartbeat linbit pacemaker

service

stonith

[root@node1 ~]# crm ra list ocf linbit

drbd   

[root@node1 ~]# crm ra info ocf:linbit:drbd

Manages a DRBD device as a Master/Slave resource (ocf:linbit:drbd)


This resource agent manages a DRBD resource as a master/slave resource.

DRBD is a shared-nothing replicated storage device.

Note that you should configure resource level fencing in DRBD,

this cannot be done from this resource agent.

See the DRBD User's Guide for more information.

http://www.drbd.org/docs/applications/


Parameters (*: required, []: default):


drbd_resource* (string): drbd resource name

    The name of the drbd resource from the drbd.conf file.


drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf

    Full path to the drbd.conf file.


stop_outdates_secondary (boolean, [false]): outdate a secondary on stop

    Recommended setting: until pacemaker is fixed, leave at default (disabled).

    

    Note that this feature depends on the passed in information in

    OCF_RESKEY_CRM_meta_notify_master_uname to be correct, which unfortunately is

    not reliable for pacemaker versions up to at least 1.0.10 / 1.1.4.

    

    If a Secondary is stopped (unconfigured), it may be marked as outdated in the

    drbd meta data, if we know there is still a Primary running in the cluster.

    Note that this does not affect fencing policies set in drbd config,

    but is an additional safety feature of this resource agent only.

    You can enable this behaviour by setting the parameter to true.

    

    If this feature seems to not do what you expect, make sure you have defined

    fencing policies in the drbd configuration as well.


Operations' defaults (advisory minimum):


    start         timeout=240

    promote       timeout=90

    demote        timeout=90

    notify        timeout=90

    stop          timeout=100

    monitor_Slave timeout=20 interval=20

    monitor_Master timeout=20 interval=10

         

         [root@node1 ~]# crm configure property stonith-enabled=false

         [root@node1 ~]# crm configure property no-quorum-policy=ignore

         [root@node1 ~]# crm configure rsc_defaults resource-stickiness=100

[root@node1 ~]# crm

crm(live)# configure show

node node1.pancou.com

node node2.pancou.com

property cib-bootstrap-options: \

dc-version=1.1.14-8.el6-70404b0 \

cluster-infrastructure="classic openais (with plugin)" \

expected-quorum-votes=2 \

--help \

--list \

stonith-enabled=false \

no-quorum-policy=ignore

rsc_defaults rsc-options: \

resource-stickiness=100


DRBD must run on both nodes at the same time, but in the primary/secondary model only one node can be Master while the other is Slave. It is therefore a special kind of cluster resource: a multi-state clone, whose instances are differentiated into Master and Slave roles, and both nodes start out in the Slave state when the service first comes up.


# Define the resource. Inside configure, mysqlstore is the resource name we choose, drbd_resource=mystorage is the DRBD resource name, and the remaining options define the monitor/start/stop operations.

crm(live)configure# primitive mysqlstore ocf:linbit:drbd params

drbd_resource=mystorage op monitor role=Master interval=30s timeout=20s op monitor role=Slave interval=60s timeout=20s op start timeout=240s op stop timeout=100s

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

(These two ERRORs do not concern the resource being defined: as the `configure show` output above confirms, `--help` and `--list` were at some point accidentally committed as attributes of cib-bootstrap-options. They can be cleaned up by deleting those two lines from the property section with `crm configure edit`; otherwise they reappear on every `verify`.)

    

    # Define the master (multi-state) resource

    crm(live)configure# master ms_mysqlstore mysqlstore meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="True"

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# commit

crm(live)configure# show

node node1.pancou.com

node node2.pancou.com

primitive mysqlstore ocf:linbit:drbd \

params drbd_resource=mystorage \

op monitor role=Master interval=30s timeout=20s \

op monitor role=Slave interval=60s timeout=20s \

op start timeout=240s interval=0 \

op stop timeout=100s interval=0

ms ms_mysqlstore mysqlstore \

meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=True

property cib-bootstrap-options: \

dc-version=1.1.14-8.el6-70404b0 \

cluster-infrastructure="classic openais (with plugin)" \

expected-quorum-votes=2 \

--help \

--list \

stonith-enabled=false \

no-quorum-policy=ignore

rsc_defaults rsc-options: \

resource-stickiness=100

crm(live)configure# cd 

crm(live)# status

Last updated: Fri Aug 12 04:29:00 2016 Last change: Fri Aug 12 04:28:57 2016 by root via cibadmin on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 2 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]


          Define the shared-filesystem resource so that it is mounted automatically when the service moves between nodes. device=/dev/drbd0 is the device to mount and directory=/data is the mount point;

          the mount point must exist, with the identical path, on both nodes:

          crm(live)configure# primitive mysqlfs ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/data fstype=ext4 op monitor interval=30s timeout=40s on-fail=restart op start timeout=60s op stop timeout=60s 

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

           

           Do not commit yet; first define the colocation constraint:

           crm(live)configure# collocation mysqlfs_with_ms_mysqlstore_master inf: mysqlfs ms_mysqlstore:Master

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

           

           Define the ordering constraint:

           crm(live)configure# order mysqlfs_after_ms_mysqlstore_master mandatory: ms_mysqlstore:promote mysqlfs:start

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

           

           crm(live)# configure show

node node1.pancou.com

node node2.pancou.com

primitive mysqlfs Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op monitor interval=30s timeout=40s on-fail=restart \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0

primitive mysqlstore ocf:linbit:drbd \

        params drbd_resource=mystorage \

        op monitor role=Master interval=30s timeout=20s \

        op monitor role=Slave interval=60s timeout=20s \

        op start timeout=240s interval=0 \

        op stop timeout=100s interval=0

ms ms_mysqlstore mysqlstore \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=True

order mysqlfs_after_ms_mysqlstore_master Mandatory: ms_mysqlstore:promote mysqlfs:start

colocation mysqlfs_with_ms_mysqlstore_master inf: mysqlfs ms_mysqlstore:Master

property cib-bootstrap-options: \

        dc-version=1.1.14-8.el6-70404b0 \

        cluster-infrastructure="classic openais (with plugin)" \

        expected-quorum-votes=2 \

        --help \

        --list \

        stonith-enabled=false \

        no-quorum-policy=ignore

rsc_defaults rsc-options: \

        resource-stickiness=100


        crm(live)# status

Last updated: Fri Aug 12 05:06:55 2016 Last change: Fri Aug 12 05:04:20 2016 by root via cibadmin on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com


 [root@node1 ~]# ls /data/

inittab  lost+found

 [root@node1 ~]# cat /data/inittab 

 ..........

 ......

#   6 - reboot (Do NOT set initdefault to this)

id:3:initdefault:


Next we simulate a failure of node1 and check whether the resources fail over correctly to node2.

Run the following command on node1:

# crm node standby node1.pancou.com


[root@node1 ~]# crm node standby

[root@node1 ~]# crm status

Last updated: Fri Aug 12 05:13:59 2016 Last change: Fri Aug 12 05:14:01 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com


 Verify on node2 that the failover succeeded:

 [root@node2 ~]# crm status

Last updated: Fri Aug 12 05:16:39 2016 Last change: Fri Aug 12 05:14:01 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

 [root@node2 corosync]# cd

[root@node2 ~]# ls /data/

inittab  lost+found

[root@node2 ~]# cat /data/inittab 

..........

 ......

#   6 - reboot (Do NOT set initdefault to this)

id:3:initdefault:


From the output above we can see that node1 has entered standby mode and its DRBD resource has stopped; failover has completed and all resources are now running normally on node2.
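Rather than eyeballing the status output, the current Master can be extracted from a script. Below is a minimal sketch; `master_of` is a hypothetical helper that parses the `Masters: [ ... ]` line of `crm status` output:

```shell
#!/bin/sh
# master_of: read `crm status` output on stdin and print the node(s)
# listed on the "Masters: [ ... ]" line of a Master/Slave set.
master_of() {
    sed -n 's/.*Masters: \[ \(.*\) \].*/\1/p'
}

# On a live cluster:  crm status | master_of
# Here, against a captured sample of the output shown above:
sample=' Master/Slave Set: ms_mysqlstore [mysqlstore]
     Masters: [ node2.pancou.com ]
     Stopped: [ node1.pancou.com ]'
printf '%s\n' "$sample" | master_of
```

After `crm node standby` on node1, the helper reports node2.pancou.com, matching the transcript above.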


Bring node1 back online:

# crm node online node1.pancou.com


[root@node1 ~]# crm node online

[root@node1 ~]# crm status

Last updated: Fri Aug 12 05:19:32 2016 Last change: Fri Aug 12 05:19:35 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Slaves: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com


 三、MySQL configuration

            [root@node2 mariadb-10.1.16]# crm status

Last updated: Fri Aug 12 08:44:44 2016 Last change: Fri Aug 12 08:44:36 2016 by root via crm_attribute on node2.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

     

     1、Installing and configuring MySQL on node2

            

       1)、Install build dependencies

           [root@node2 ~]# yum install make  cmake 

           bison ncurses-devel autoconf automake curl curl-devel gcc gcc-c++ gtk+-devel zlib-devel openssl openssl-devel pcre-devel  perl kernel-headers compat*  cpp glibc libgomp libstdc++-devel keyutils-libs-devel libsepol-devel libselinux-devel krb5-devel  libXpm* freetype freetype-devel freetype* fontconfig fontconfig-devel libpng* ncurses* libtool* libxml2-devel  bison libaio-devel


        2)、Extract and compile

           [root@node2 ~]# tar -zxvf mariadb-10.1.16.tar.gz -C /usr/local/src/

           [root@node2 ~]# cd /usr/local/src/mariadb-10.1.16/

           [root@node2 mariadb-10.1.16]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DSYSCONFDIR=/etc -DMYSQL_DATADIR=/data/mysql -DMYSQL_UNIX_ADDR=/var/lib/mysql/mysql.sock -DMYSQL_USER=mysql -DMYSQL_TCP_PORT=3306 -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_MEMORY_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DENABLED_LOCAL_INFILE=1   -DWITH_SSL=system

           [root@node2 mariadb-10.1.16]# make && make install

        

        3)、Create the mysql user

           [root@node2 mariadb-10.1.16]# useradd -r -s /sbin/nologin mysql

           [root@node2 mariadb-10.1.16]# id mysql

           uid=498(mysql) gid=498(mysql) groups=498(mysql)

        

        4)、Set permissions on the data directory (the DRBD mount point)

           [root@node2 mariadb-10.1.16]# mkdir -p /data/mysql

           [root@node2 mariadb-10.1.16]# mkdir -p /data/logs               ------- do not create these again on node1


           [root@node2 mariadb-10.1.16]# chown -R mysql:mysql /data/


        5)、Set up the configuration file and service script

           [root@node2 mariadb-10.1.16]# cp support-files/my-large.cnf /etc/my.cnf

           [root@node2 mariadb-10.1.16]# cp support-files/mysql.server /etc/init.d/mysqld

           [root@node2 mariadb-10.1.16]# chmod +x /etc/init.d/mysqld 

           [root@node2 mariadb-10.1.16]# ll /etc/init.d/mysqld 

           -rwxr-xr-x 1 root root 12549 Aug 12 08:33 /etc/init.d/mysqld

           [root@node2 mariadb-10.1.16]# chkconfig --add mysqld

           [root@node2 mariadb-10.1.16]# chkconfig --list mysqld

           mysqld         0:off 1:off 2:on 3:on 4:on 5:on 6:off

           [root@node2 mariadb-10.1.16]# chkconfig mysqld off

           [root@node2 mariadb-10.1.16]# chkconfig --list mysqld

           mysqld         0:off 1:off 2:off 3:off 4:off 5:off 6:off

           [root@node2 mariadb-10.1.16]# vim /etc/init.d/mysqld

            basedir=/usr/local/mysql

            datadir=/data/mysql


        6)、Initialize MySQL

           [root@node2 mariadb-10.1.16]# /usr/local/mysql/scripts/mysql_install_db \

--defaults-file=/etc/my.cnf \

--basedir=/usr/local/mysql \

--datadir=/data/mysql \

--user=mysql

                                    --------------- note: do NOT run the initialization on node1
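Since /data/mysql lives on the DRBD device, the initialization must happen exactly once, on the node that currently holds the Primary role. A guard like the sketch below (the `needs_init` helper is hypothetical) makes the step safe to re-run and harmless on node1, where the already-initialized datadir arrives with the DRBD mount:

```shell
#!/bin/sh
# One-time initialization guard for a MySQL datadir on shared storage.
# needs_init DIR succeeds (exit 0) while DIR has no `mysql` system
# database yet, i.e. mysql_install_db still has to be run.
needs_init() {
    [ ! -d "$1/mysql" ]
}

DATADIR=${DATADIR:-/data/mysql}
if needs_init "$DATADIR"; then
    echo "initializing $DATADIR"
    # /usr/local/mysql/scripts/mysql_install_db \
    #     --defaults-file=/etc/my.cnf \
    #     --basedir=/usr/local/mysql \
    #     --datadir="$DATADIR" --user=mysql
else
    echo "$DATADIR already initialized, skipping"
fi
```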

        7)、Start the mysqld service

            [root@node2 mariadb-10.1.16]# service mysqld start

            Starting MySQL.. SUCCESS! 


        8)、Set environment variables

            [root@node2 mariadb-10.1.16]# vim /etc/profile.d/mysql.sh

            export PATH=/usr/local/mysql/bin:$PATH


            [root@node2 mariadb-10.1.16]# source /etc/profile.d/mysql.sh

            [root@node2 mariadb-10.1.16]# echo $PATH

            /usr/local/mysql/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin


        9)、Log in to test

           [root@node2 mariadb-10.1.16]# mysql

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 4

Server version: 10.1.16-MariaDB Source distribution


Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| test               |

+--------------------+

4 rows in set (0.00 sec)

            

            [root@node2 mariadb-10.1.16]# service mysqld stop            

            Shutting down MySQL.. SUCCESS! 


            [root@node2 mariadb-10.1.16]# crm node standby node2.pancou.com

            [root@node2 mariadb-10.1.16]# crm node online node2.pancou.com

            [root@node2 mariadb-10.1.16]# crm status

Last updated: Fri Aug 12 09:05:33 2016 Last change: Fri Aug 12 09:05:29 2016 by root via crm_attribute on node2.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com


        2、Installing and configuring MySQL on node1

               [root@node1 mariadb-10.1.16]# ls /data/

               inittab  logs  lost+found  mysql

           1)、The MySQL installation and setup are the same as on node2

               [root@node1 mariadb-10.1.16]# useradd -r -s /sbin/nologin mysql

               [root@node1 mariadb-10.1.16]# id mysql

                uid=498(mysql) gid=498(mysql) groups=498(mysql)

               [root@node1 mariadb-10.1.16]# ll /data/

               total 28

               -rw-r--r-- 1 mysql mysql   884 Aug 12 00:54 inittab

               drwxr-xr-x 2 mysql mysql  4096 Aug 12 08:46 logs

               drwx------ 2 mysql mysql 16384 Aug 12 00:39 lost+found

               drwxr-xr-x 5 mysql mysql  4096 Aug 12 09:02 mysql

               [root@node1 mariadb-10.1.16]# ll -d /data/

               drwxr-xr-x 5 mysql mysql 4096 Aug 12 08:46 /data/


               [root@node1 mariadb-10.1.16]# cp support-files/my-large.cnf /etc/my.cnf

               [root@node2 mariadb-10.1.16]# scp /etc/init.d/mysqld node1:/etc/init.d/

mysqld                                                                                 100%   12KB  12.3KB/s   00:00

               [root@node1 mariadb-10.1.16]# ll /etc/init.d/mysqld 

               -rwxr-xr-x 1 root root 12576 Aug 12 09:13 /etc/init.d/mysqld

               [root@node1 mariadb-10.1.16]# chkconfig --add mysqld 

               [root@node1 mariadb-10.1.16]# chkconfig mysqld off

               [root@node1 mariadb-10.1.16]# chkconfig --list mysqld 

               mysqld         0:off 1:off 2:off 3:off 4:off 5:off 6:off

               

               [root@node1 mariadb-10.1.16]# service mysqld start

                Starting MySQL.... SUCCESS! 

               [root@node1 mariadb-10.1.16]# vim /etc/profile.d/mysql.sh

               export PATH=/usr/local/mysql/bin:$PATH

               [root@node1 mariadb-10.1.16]# source /etc/profile.d/mysql.sh 

               [root@node1 mariadb-10.1.16]# echo $PATH

               /usr/local/mysql/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

               

               [root@node1 mariadb-10.1.16]# mysql

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 4

Server version: 10.1.16-MariaDB Source distribution


Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| test               |

+--------------------+

4 rows in set (0.07 sec)

               

               [root@node1 mariadb-10.1.16]# service mysqld stop

               Shutting down MySQL... SUCCESS! 



        3、Configure Pacemaker + MySQL


           1)、Add the MySQL resource


            [root@node2 ~]# crm ra info ocf:heartbeat:mysql

            Manages a MySQL database instance (ocf:heartbeat:mysql)


Resource script for MySQL.

May manage a standalone MySQL database, a clone set with externally

managed replication, or a complete master/slave replication setup.


While managing replication, the default behavior is to use uname -n 

values in the change master to command.  Other IPs can be specified 

manually by adding a node attribute _mysql_master_IP

giving the IP to use for replication.  For example, if the mysql primitive 

you are using is p_mysql, the attribute to set will be 

p_mysql_mysql_master_IP.


Parameters (*: required, []: default):


binary (string, [/usr/local/mysql/bin/mysqld_safe]): MySQL server binary

    Location of the MySQL server binary


client_binary (string, [mysql]): MySQL client binary

    Location of the MySQL client binary


config (string, [/etc/my.cnf]): MySQL config

    Configuration file


datadir (string, [/var/lib/mysql]): MySQL datadir

            

[root@node1 ~]# crm 

crm(live)# configure

crm(live)configure# primitive mysqld ocf:heartbeat:mysql binary="/usr/local/mysql/bin/mysqld_safe" config="/etc/my.cnf" datadir="/data/mysql"  pid="/data/mysql/mysql.pid" socket="/var/lib/mysql/mysql.sock" op start timeout=180s op stop timeout=180s op monitor interval=20s timeout=60s on-fail=restart

crm(live)configure# verify                                       -- same stray-attribute ERRORs as before; this ocf:heartbeat:mysql primitive also proved problematic and is deleted and redefined as lsb:mysqld below

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# show

node node1.pancou.com \

        attributes standby=off

node node2.pancou.com \

        attributes standby=off

primitive mysqld mysql \

        params binary="/usr/local/mysql/bin/mysqld_safe" config="/etc/my.cnf" datadir="/data/mysql" pid="/data/mysql/mysql.pid" socket="/var/lib/mysql/mysql.sock" \

        op start timeout=180s interval=0 \

        op stop timeout=180s interval=0 \

        op monitor interval=20s timeout=60s on-fail=restart

primitive mysqlfs Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op monitor interval=30s timeout=40s on-fail=restart \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0

primitive mysqlstore ocf:linbit:drbd \

        params drbd_resource=mystorage \

        op monitor role=Master interval=30s timeout=20s \

        op monitor role=Slave interval=60s timeout=20s \

        op start timeout=240s interval=0 \

        op stop timeout=100s interval=0

         

crm(live)configure# delete mysqld

INFO: hanging colocation:mysqld_with_mysqlfs deleted

crm(live)configure# commit

crm(live)configure# primitive mysqld lsb:mysqld

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# colocation mysqld_with_mysqlfs inf: mysqld mysqlfs

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Fri Aug 12 10:15:25 2016 Last change: Fri Aug 12 10:16:03 2016 by root via cibadmin on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 4 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com
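Pacemaker controls an lsb: resource solely through the init script's exit codes, so /etc/init.d/mysqld must follow the LSB contract: `status` exits 0 while the service runs and 3 when it is stopped. The sketch below (the `lsb_status_ok` helper is hypothetical, not part of the cluster configuration) checks the `status` part:

```shell
#!/bin/sh
# lsb_status_ok SCRIPT: verify that SCRIPT's `status` action returns an
# exit code Pacemaker understands for lsb: resources (0 = running,
# 3 = stopped). Hypothetical helper.
lsb_status_ok() {
    "$1" status >/dev/null 2>&1
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 3 ]
}

# On a cluster node:
#   lsb_status_ok /etc/init.d/mysqld && echo "mysqld status exit code is LSB-clean"
```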

crm(live)configure# colocation mysqld_with_mysqlfs inf: mysqld mysqlfs 

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# order mysqld_after_mysqlfs mandatory: mysqlfs mysqld

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# show

node node1.pancou.com \

        attributes standby=off

node node2.pancou.com \

        attributes standby=off

primitive mysqld lsb:mysqld

primitive mysqlfs Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op monitor interval=30s timeout=40s on-fail=restart \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0

primitive mysqlstore ocf:linbit:drbd \

        params drbd_resource=mystorage \

        op monitor role=Master interval=30s timeout=20s \

        op monitor role=Slave interval=60s timeout=20s \

        op start timeout=240s interval=0 \

        op stop timeout=100s interval=0

ms ms_mysqlstore mysqlstore \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=True

order mysqld_after_mysqlfs Mandatory: mysqlfs mysqld

colocation mysqld_with_mysqlfs inf: mysqld mysqlfs

order mysqlfs_after_ms_mysqlstore_master Mandatory: ms_mysqlstore:promote mysqlfs:start

colocation mysqlfs_with_ms_mysqlstore_master inf: mysqlfs ms_mysqlstore:Master

property cib-bootstrap-options: \

        dc-version=1.1.14-8.el6-70404b0 \

        cluster-infrastructure="classic openais (with plugin)" \


crm(live)configure# commit

           

       2)、Add the VIP resource

crm(live)configure# primitive VIP ocf:heartbeat:IPaddr2 ip="192.168.110.150" cidr_netmask=32 nic=eth0 op monitor interval=30s

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# colocation VIP_with_mysqld inf: VIP mysqld

crm(live)configure# verify

ERROR: cib-bootstrap-options: attribute --help does not exist

ERROR: cib-bootstrap-options: attribute --list does not exist

crm(live)configure# show

node node1.pancou.com \

        attributes standby=off

node node2.pancou.com \

        attributes standby=off

primitive VIP IPaddr2 \

        params ip=192.168.110.150 cidr_netmask=32 nic=eth0 \

        op monitor interval=30s

primitive mysqld lsb:mysqld

primitive mysqlfs Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op monitor interval=30s timeout=40s on-fail=restart \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0

primitive mysqlstore ocf:linbit:drbd \

        params drbd_resource=mystorage \

        op monitor role=Master interval=30s timeout=20s \

        op monitor role=Slave interval=60s timeout=20s \

        op start timeout=240s interval=0 \

        op stop timeout=100s interval=0

ms ms_mysqlstore mysqlstore \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=True

colocation VIP_with_mysqld inf: VIP mysqld

order mysqld_after_mysqlfs Mandatory: mysqlfs mysqld

colocation mysqld_with_mysqlfs inf: mysqld mysqlfs

order mysqlfs_after_ms_mysqlstore_master Mandatory: ms_mysqlstore:promote mysqlfs:start

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Sat Aug 13 02:04:15 2016 Last change: Sat Aug 13 02:04:53 2016 by root via cibadmin on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com 
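IPaddr2 adds the VIP as an extra address on eth0, so it can be confirmed with `ip addr` on the active node. A sketch; `vip_present` is a hypothetical helper that greps `ip addr`-style output from stdin:

```shell
#!/bin/sh
# vip_present ADDR: succeed when ADDR appears as an inet address in
# `ip addr` output read from stdin. Hypothetical helper.
vip_present() {
    grep -q "inet $1/"
}

# On the active node:  ip addr show eth0 | vip_present 192.168.110.150
# Here, against a captured sample:
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.110.141/24 brd 192.168.110.255 scope global eth0
    inet 192.168.110.150/32 scope global eth0'
printf '%s\n' "$sample" | vip_present 192.168.110.150 && echo "VIP is configured"
```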


crm(live)configure# order VIP_before_ms_mysqlstore_master mandatory: VIP ms_mysqlstore:promote mysqlfs:start

crm(live)configure# verify

crm(live)configure# show

node node1.pancou.com \

        attributes standby=off

node node2.pancou.com \

        attributes standby=off

primitive VIP IPaddr2 \

        params ip=192.168.110.150 cidr_netmask=32 nic=eth0 \

        op monitor interval=30s

primitive mysqld lsb:mysqld

primitive mysqlfs Filesystem \

        params device="/dev/drbd0" directory="/data" fstype=ext4 \

        op monitor interval=30s timeout=40s on-fail=restart \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0

primitive mysqlstore ocf:linbit:drbd \

        params drbd_resource=mystorage \

        op monitor role=Master interval=30s timeout=20s \

        op monitor role=Slave interval=60s timeout=20s \

        op start timeout=240s interval=0 \

        op stop timeout=100s interval=0

ms ms_mysqlstore mysqlstore \

        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=True

order VIP_before_ms_mysqlstore_master Mandatory: VIP ms_mysqlstore:promote mysqlfs:start

colocation VIP_with_mysqld inf: VIP mysqld

order mysqld_after_mysqlfs Mandatory: mysqlfs mysqld

colocation mysqld_with_mysqlfs inf: mysqld mysqlfs

order mysqlfs_after_ms_mysqlstore_master Mandatory: ms_mysqlstore:promote mysqlfs:start

colocation mysqlfs_with_ms_mysqlstore_master inf: mysqlfs ms_mysqlstore:Master

property cib-bootstrap-options: \

        dc-version=1.1.14-8.el6-70404b0 \

        cluster-infrastructure="classic openais (with plugin)" \

        expected-quorum-votes=2 \

        --help \

        --list \

        stonith-enabled=false \

        no-quorum-policy=ignore

rsc_defaults rsc-options: \

        resource-stickiness=100


crm(live)configure# commit        
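A design note: the pairwise colocation and order constraints above get harder to read as resources accumulate. An equivalent, more compact layout puts the resources that must travel together into a group (a group implies ordering and colocation among its members) and ties only the group to the DRBD Master. This is a sketch of that alternative, not what this document applies; the individual mysqlfs/mysqld/VIP constraints would have to be removed first:

```shell
# Alternative layout (sketch): one group instead of pairwise constraints.
crm <<'EOF'
configure group g_mysql mysqlfs mysqld VIP
configure colocation g_mysql_with_drbd_master inf: g_mysql ms_mysqlstore:Master
configure order g_mysql_after_drbd_promote Mandatory: ms_mysqlstore:promote g_mysql:start
EOF
```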


    6)、Simulate failover


        Simulate node1 going offline:


[root@node1 ~]# crm node standby node1.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:15:26 2016 Last change: Sat Aug 13 02:15:59 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

 mysqld (lsb:mysqld): Stopped

 VIP (ocf::heartbeat:IPaddr2): Started node2.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:16:05 2016 Last change: Sat Aug 13 02:15:59 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

 mysqld (lsb:mysqld): Started node2.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node2.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms

             

       Bring node1 back online


       [root@node1 ~]# crm node online node1.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:19:21 2016 Last change: Sat Aug 13 02:19:58 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Slaves: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

 mysqld (lsb:mysqld): Started node2.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node2.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms




    Simulate node2 going offline

    [root@node1 ~]# crm node standby node2.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:20:37 2016 Last change: Sat Aug 13 02:21:11 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node2.pancou.com: standby

Online: [ node1.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Stopped: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Stopped

 mysqld (lsb:mysqld): Stopped

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


[root@node1 ~]# ls /data/

inittab  logs  lost+found  mysql

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:20:53 2016 Last change: Sat Aug 13 02:21:11 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node2.pancou.com: standby

Online: [ node1.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Stopped: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms

          

          Bring node2 back online:

    [root@node1 ~]# crm node online node2.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:22:11 2016 Last change: Sat Aug 13 02:22:46 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


        Good — at this point all of the resource configuration is complete. Next, let's test it.


      7)、Test MySQL high availability


          [root@node1 ~]# mysql

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 4

Server version: 10.1.16-MariaDB Source distribution


Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> grant all on *.* to pancou@'%' identified by 'pancou';

Query OK, 0 rows affected (0.00 sec)


MariaDB [(none)]> show grants for pancou;

+----------------------------------------------------------------------------------------------------------------+

| Grants for pancou@%                                                                                            |

+----------------------------------------------------------------------------------------------------------------+

| GRANT ALL PRIVILEGES ON *.* TO 'pancou'@'%' IDENTIFIED BY PASSWORD '*48D10DDED8C5DDCE0BFD4C7BB6EF82DA15159F83' |

+----------------------------------------------------------------------------------------------------------------+

1 row in set (0.00 sec)
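The hash shown by SHOW GRANTS is a mysql_native_password digest: `*` followed by the uppercase hex of SHA1(SHA1(password)). A minimal sketch reproducing it with openssl (assumes `openssl` is installed; `pancou` is the password granted above):

```shell
# mysql_native_password hash = '*' + UPPER(HEX(SHA1(SHA1(password))))
# reproduce the digest SHOW GRANTS printed for the password 'pancou'
hash=$(printf 'pancou' \
    | openssl dgst -sha1 -binary \
    | openssl dgst -sha1 \
    | awk '{print toupper($NF)}')
echo "*$hash"
```

If the printed value matches the one in the SHOW GRANTS output, the stored hash really does correspond to the password 'pancou'.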


           Log in from a remote host:

            [root@node2 ~]# mysql -h 192.168.110.150 -upancou -p 

Enter password: 

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 5

Server version: 10.1.16-MariaDB Source distribution


Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MariaDB [(none)]> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| test               |

+--------------------+

4 rows in set (0.05 sec)


           Simulate a failure (put the active node into standby):

           [root@node1 ~]# crm status

Last updated: Sat Aug 13 02:35:51 2016 Last change: Sat Aug 13 02:22:46 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


        [root@node1 ~]# crm node standby node1.pancou.com

        [root@node1 ~]# crm status

Last updated: Sat Aug 13 02:38:04 2016 Last change: Sat Aug 13 02:37:38 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node1.pancou.com: standby

Online: [ node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node2.pancou.com ]

     Stopped: [ node1.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node2.pancou.com

 mysqld (lsb:mysqld): Started node2.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node2.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


        Test MySQL again:

        MariaDB [(none)]> show databases;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    4

Current database: *** NONE ***


+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| test               |

+--------------------+

4 rows in set (15.07 sec)
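The ERROR 2006 above is expected: the client's TCP connection dies with the old master, and the mysql CLI transparently reconnects to the VIP once node2 takes over. Application code needs equivalent retry logic; a minimal sketch (the `retry` helper and the mysql invocation in the comment are illustrative, not part of the cluster configuration):

```shell
# retry: re-run a command until it succeeds or max attempts are exhausted,
# the way a client should ride out a failover window
retry() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep 1          # back off between attempts
    done
}

# real use would look like:
#   retry 30 mysql -h 192.168.110.150 -upancou -ppancou -e 'SELECT 1'
retry 3 true && echo "command succeeded"
```

With a 30-attempt budget and a 1-second backoff, the helper comfortably covers the ~15-second reconnect window observed above.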

            

          [root@node1 ~]# crm node standby node2.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:40:41 2016 Last change: Sat Aug 13 02:41:16 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node2.pancou.com: standby

Online: [ node1.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Slaves: [ node1.pancou.com ]

     Stopped: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Stopped

 mysqld (lsb:mysqld): Stopped

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:40:46 2016 Last change: Sat Aug 13 02:41:16 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Node node2.pancou.com: standby

Online: [ node1.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Stopped: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com



Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms

          

           MariaDB [(none)]> show databases;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    4

Current database: *** NONE ***


+--------------------+

| Database           |

+--------------------+

| information_schema |

| mysql              |

| performance_schema |

| test               |

+--------------------+

4 rows in set (0.03 sec)

           [root@node1 ~]# crm node online node2.pancou.com

[root@node1 ~]# crm status

Last updated: Sat Aug 13 02:44:01 2016 Last change: Sat Aug 13 02:44:37 2016 by root via crm_attribute on node1.pancou.com

Stack: classic openais (with plugin)

Current DC: node2.pancou.com (version 1.1.14-8.el6-70404b0) - partition with quorum

2 nodes and 5 resources configured, 2 expected votes


Online: [ node1.pancou.com node2.pancou.com ]


Full list of resources:


 Master/Slave Set: ms_mysqlstore [mysqlstore]

     Masters: [ node1.pancou.com ]

     Slaves: [ node2.pancou.com ]

 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com

 mysqld (lsb:mysqld): Started node1.pancou.com

 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com


Failed Actions:

* mysqlstore_monitor_30000 on node1.pancou.com 'unknown error' (1): call=133, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:39:29 2016', queued=0ms, exec=0ms

* mysqlstore_monitor_60000 on node2.pancou.com 'unknown error' (1): call=111, status=Timed Out, exitreason='none',

    last-rc-change='Fri Aug 12 07:30:40 2016', queued=0ms, exec=0ms


            MariaDB [(none)]> select host,user,password from mysql.user;

+------------------+--------+-------------------------------------------+

| host             | user   | password                                  |

+------------------+--------+-------------------------------------------+

| localhost        | root   |                                           |

| node2.pancou.com | root   |                                           |

| 127.0.0.1        | root   |                                           |

| ::1              | root   |                                           |

| localhost        |        |                                           |

| node2.pancou.com |        |                                           |

| %                | pancou | *48D10DDED8C5DDCE0BFD4C7BB6EF82DA15159F83 |

+------------------+--------+-------------------------------------------+

7 rows in set (0.01 sec)
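Failback verified — the cluster view and the data both line up. The same checks can be scripted for ongoing monitoring; a minimal sketch that greps a `crm status` snapshot for the resources configured above (the snapshot here is hard-coded from the output above — in real use it would come from `crm status` itself):

```shell
#!/bin/sh
# sample snapshot copied from the `crm status` output above;
# in real use, replace the heredoc with: status=$(crm status)
status=$(cat <<'EOF'
 Master/Slave Set: ms_mysqlstore [mysqlstore]
     Masters: [ node1.pancou.com ]
     Slaves: [ node2.pancou.com ]
 mysqlfs (ocf::heartbeat:Filesystem): Started node1.pancou.com
 mysqld (lsb:mysqld): Started node1.pancou.com
 VIP (ocf::heartbeat:IPaddr2): Started node1.pancou.com
EOF
)

# every resource configured earlier must be running somewhere
for want in 'Masters: \[' 'mysqlfs .*Started' 'mysqld .*Started' 'VIP .*Started'; do
    echo "$status" | grep -q "$want" || { echo "DEGRADED: $want"; exit 1; }
done
echo "cluster healthy"
```

A check like this can run from cron on either node and page an operator when any resource drops out of the Started/Masters state.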


