Prerequisites:
This setup uses two test nodes, node1 and node2, with IP addresses 202.207.178.6 and 202.207.178.7 respectively; 202.207.178.8 is the management node, from which node1 and node2 are configured. At this point DRBD has already been set up and is working correctly.
(To avoid interference, disable the firewall and SELinux first. For the DRBD setup itself, see http://10927734.blog.51cto.com/10917734/1867283)
I. Installing Corosync
1. Stop the DRBD service and disable it at boot
On the primary node:
[root@node2 ~]# umount /mydata/
[root@node2 ~]# drbdadm secondary mydrbd
[root@node2 ~]# service drbd stop
[root@node2 ~]# chkconfig drbd off
On the secondary node:
[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
2. Install the required packages
[root@fsy ~]# for I in {1..2}; do ssh node$I 'mkdir /root/corosync/'; scp *.rpm node$I:/root/corosync; ssh node$I 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'; done
(Run this from the home directory after copying heartbeat-3.0.4-2.el6.i686.rpm and heartbeat-libs-3.0.4-2.el6.i686.rpm there.)
[root@fsy ~]# for I in {1..2}; do ssh node$I 'yum -y install cluster-glue corosync libesmtp pacemaker pacemaker-cts'; done
3. Create the log directory the cluster needs
[root@node1 corosync]# mkdir /var/log/cluster
[root@node2 ~]# mkdir /var/log/cluster
4. Configure corosync (the commands below are run on node1), then try starting it
# cd /etc/corosync
# cp corosync.conf.example corosync.conf
Next, edit corosync.conf and change the following directives:
bindnetaddr: 202.207.178.0    # network address of the segment the nodes are on
secauth: on                   # enable authentication of cluster traffic
threads: 2                    # number of worker threads to start
to_syslog: no                 # do not log to syslog (logs go to the log file instead)
Then add the following stanzas, so that pacemaker starts together with corosync, and to define the user and group corosync runs as:
service {
ver: 0
name: pacemaker
}
aisexec {
user: root
group: root
}
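Putting these edits together, the relevant parts of /etc/corosync/corosync.conf end up looking roughly like the sketch below. The mcastaddr/mcastport values are placeholders standing in for whatever your corosync.conf.example already contains; only the directives discussed above are actually changed:

```
totem {
    version: 2
    secauth: on                      # authenticate cluster traffic
    threads: 2
    interface {
        ringnumber: 0
        bindnetaddr: 202.207.178.0   # network segment the nodes live on
        mcastaddr: 239.255.1.1       # placeholder: keep the sample file's value
        mcastport: 5405              # placeholder: keep the sample file's value
    }
}
logging {
    to_syslog: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log   # matches the directory created above
}
service {
    ver: 0
    name: pacemaker      # launch pacemaker from corosync (plugin mode)
}
aisexec {
    user: root
    group: root
}
```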
Generate the authentication key used for inter-node communication:
# corosync-keygen
Copy corosync.conf and authkey to node2:
# scp -p corosync.conf authkey node2:/etc/corosync/
Try starting it (run on node1):
# service corosync start
Note: start corosync on node2 from node1 with the following command; do not start it directly on node2:
# ssh node2 '/etc/init.d/corosync start'
5. Verify that everything is working
Check that the corosync engine started correctly:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Expected output:
Oct 23 00:38:06 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Oct 23 00:38:06 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'
Check that the initial membership notifications went out correctly:
# grep TOTEM /var/log/cluster/corosync.log
Expected output:
Oct 23 00:38:06 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 23 00:38:06 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 23 00:38:06 corosync [TOTEM ] The network interface [202.207.178.6] is now up.
Oct 23 00:39:35 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup:
# grep ERROR: /var/log/messages | grep -v unpack_resources
Check that pacemaker started correctly:
# grep pcmk_startup /var/log/cluster/corosync.log
Expected output:
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Oct 23 00:38:06 corosync [pcmk ] Logging: Initialized pcmk_startup
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Service: 9
Oct 23 00:38:06 corosync [pcmk ] info: pcmk_startup: Local hostname: node1
Check the cluster node status with:
# crm_mon
Last updated: Tue Oct 25 17:28:10 2016 Last change: Tue Oct 25 17:21:56 2016 by hacluster via crmd on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 0 resources configured, 2 expected votes
Online: [ node1 node2 ]
The output above shows that both nodes have started correctly and the cluster is in a normal working state.
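As a quick scripted sanity check, you can grep the status text for the Online line. The sketch below runs against the sample output shown above; on a live cluster you would capture the real output with `status=$(crm_mon -1)` instead:

```shell
# Sample status text taken from the crm_mon output above.
# On a live cluster, replace this with: status=$(crm_mon -1)
status='Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 0 resources configured, 2 expected votes
Online: [ node1 node2 ]'

# Report whether each expected node appears on the "Online:" line
for n in node1 node2; do
    if printf '%s\n' "$status" | grep -q "^Online:.*$n"; then
        echo "$n is online"
    else
        echo "$n is OFFLINE"
    fi
done
```

A check like this is handy in a monitoring cron job, since `crm_mon -1` prints the status once and exits instead of running interactively.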
II. Configuring Resources and Constraints
1. Install the crmsh package:
pacemaker itself is only a resource manager; we need an interface in order to define and manage the resources it controls, and crmsh is such a configuration interface for pacemaker. As of pacemaker 1.1.8, crmsh became an independent project and is no longer shipped with pacemaker. crmsh provides an interactive command-line interface for managing a Pacemaker cluster; it is powerful, easy to use, and widely deployed. pcs is a similar tool.
Add the following to a repo file under /etc/yum.repos.d/:
[ewai]
name=aaa
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
# yum clean all
# yum makecache
[root@node1 yum.repos.d]# yum install crmsh
2. Check the configuration for syntax errors, and set basic cluster options
crm(live)configure# verify
Here we can disable stonith first with:
# crm configure property stonith-enabled=false
or: crm(live)configure# property stonith-enabled=false
crm(live)configure# commit
Configure how the cluster behaves when it does not have quorum:
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
Configure resource stickiness, so resources prefer to stay on their current node:
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
3. Configure resources
Define a resource named mysqldrbd:
(interval: how often the monitor operation runs)
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# verify
Define a master/slave resource named ms_mysqldrbd:
It is declared as a clone of mysqldrbd. master-max=1: at most 1 master instance; master-node-max=1: at most 1 master instance per node at any time; clone-max=2: at most 2 clone instances in total; clone-node-max=1: at most 1 clone instance per node.
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
4. Test
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:05:43 2016 Last change: Sun Oct 23 13:03:52 2016 by root via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ node1 node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node1 ]
Slaves: [ node2 ]
[root@node1 ~]# drbd-overview
0:mydrbd Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:06:30 2016 Last change: Sun Oct 23 13:06:25 2016 by root via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Node node1: standby
Online: [ node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node2 ]
Stopped: [ node1 ]
[root@node1 ~]# crm node online
[root@node1 ~]# crm status
Last updated: Sun Oct 23 13:07:00 2016 Last change: Sun Oct 23 13:06:58 2016 by root via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ node1 node2 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node2 ]
Slaves: [ node1 ]
The service is working correctly.
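The role switch can also be verified by parsing the drbd-overview line. A small sketch, run here against the sample output shown above (on a live node you would capture the real line with `line=$(drbd-overview | grep mydrbd)`):

```shell
# Sample line taken from the drbd-overview output above.
# On a live node: line=$(drbd-overview | grep mydrbd)
line='0:mydrbd Connected Primary/Secondary UpToDate/UpToDate C r-----'

# Field 3 is "localrole/peerrole"; split it on the slash
roles=$(printf '%s\n' "$line" | awk '{print $3}')
local_role=${roles%%/*}
peer_role=${roles#*/}
echo "local=$local_role peer=$peer_role"   # local=Primary peer=Secondary
```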
5. Configure a filesystem resource so the DRBD device is mounted automatically; add a colocation constraint to keep this resource on the master node, and an order constraint so DRBD is promoted first and mystore started afterwards
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
crm(live)configure# verify
crm(live)configure# commit
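For reference, the configuration built up so far can be expressed as a single crmsh batch (a sketch assembled from the commands above; a file with this content could be applied with `crm configure load update <file>`):

```
property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100
primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd \
    op start timeout=240 op stop timeout=100 \
    op monitor role=Master interval=20 timeout=30 \
    op monitor role=Slave interval=30 timeout=30
primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 \
    directory=/mydata fstype=ext4 \
    op start timeout=60 op stop timeout=60
ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 notify=true
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
```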
Test:
[root@node2 ~]# crm node standby
[root@node2 ~]# crm status
Last updated: Sun Oct 23 13:45:26 2016 Last change: Sun Oct 23 13:45:20 2016 by root via crm_attribute on node2
Stack: classic openais (with plugin)
Current DC: node2 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 3 resources configured, 2 expected votes
Node node2: standby
Online: [ node1 ]
Full list of resources:
Master/Slave Set: ms_mysqldrbd [mysqldrbd]
Masters: [ node1 ]
Stopped: [ node2 ]
mystore (ocf::heartbeat:Filesystem): Started node1
[root@node1 yum.repos.d]# ls /mydata/
fsy lost+found
At this point everything tests out fine.
III. Installing MySQL (first on the primary node, then on the secondary)
1. Extract the downloaded tarball into /usr/local and change into that directory
#tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local
#cd /usr/local/
2. Create a symlink to the extracted directory and change into it
#ln -sv mysql-5.5.52-linux2.6-i686 mysql
#cd mysql
3. Create the mysql group and the mysql user (as a system user)
#groupadd -r -g 306 mysql
#useradd -g 306 -r -u 306 mysql
4. Make everything under the mysql directory owned by the mysql user and group
#chown -R mysql:mysql /usr/local/mysql/*
5. Create the data directory; make it owned by the mysql user and group, with no access for others
#mkdir /mydata/data
#chown -R mysql:mysql /mydata/data/
#chmod o-rx /mydata/data/
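What the chmod above accomplishes, sketched on a throwaway directory instead of /mydata/data: a directory at the usual 0755 loses read/execute for "others", ending up at 0750 (owner rwx, group r-x, others nothing).

```shell
demo=$(mktemp -d)      # throwaway stand-in for /mydata/data
chmod 755 "$demo"      # start from the common default mode
chmod o-rx "$demo"     # strip read/execute from "others"
stat -c '%a' "$demo"   # prints 750
```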
6. Everything is ready; run the installation
#scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
7. After installation, for safety, change the ownership of everything under /usr/local/mysql
#chown -R root:mysql /usr/local/mysql/*
8. Install the init script, and disable it at boot
#cp support-files/mysql.server /etc/init.d/mysqld
#chkconfig --add mysqld
#chkconfig mysqld off
9. Edit the database configuration file
#cp support-files/my-large.cnf /etc/my.cnf
#vim /etc/my.cnf, then change/add the following:
thread_concurrency = 2    # my machine has 1 CPU, so I set the thread count to 2
datadir = /mydata/data
10. Start mysql
# service mysqld start
# /usr/local/mysql/bin/mysql
11. Check that it works
mysql> show databases;
mysql> CREATE DATABASE mydb;
mysql> show databases;
12. Stop the mysql service on the primary node, fail over so the secondary becomes the primary, then install mysql there
[root@node1 mysql]# service mysqld stop
[root@node1 mysql]# crm node standby
[root@node1 mysql]# crm node online
13. Extract the downloaded tarball into /usr/local and change into that directory
#tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local
#cd /usr/local/
14. Create a symlink to the extracted directory and change into it
#ln -sv mysql-5.5.52-linux2.6-i686 mysql
#cd mysql
15. Create the mysql group and the mysql user (as a system user)
#groupadd -r -g 306 mysql
#useradd -g 306 -r -u 306 mysql
16. Make everything under the mysql directory owned by the root user and the mysql group
#chown -R root:mysql /usr/local/mysql/*
17. Install the init script, and disable it at boot
#cp support-files/mysql.server /etc/init.d/mysqld
#chkconfig --add mysqld
#chkconfig mysqld off
18. Edit the database configuration file
#cp support-files/my-large.cnf /etc/my.cnf
#vim /etc/my.cnf, then change/add the following:
thread_concurrency = 2    # my machine has 1 CPU, so I set the thread count to 2
datadir = /mydata/data
19. Start mysql
# service mysqld start
# /usr/local/mysql/bin/mysql
20. Check that it works
mysql> show databases;
The mydb database created on the other node is there — the test passed!
IV. Configuring the MySQL Resource
1. Stop the mysql service on the primary node
# service mysqld stop
2. Define the primitive resource
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
3. Define resource constraints
A colocation constraint, so mysqld runs together with mystore:
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# verify
An order constraint, so mystore starts first and mysqld starts afterwards:
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# verify
crm(live)configure# commit
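With the mysqld resource in place, the additions from this part can be summarized as one crmsh fragment (a sketch of the commands above). The resulting start-up chain is: promote ms_mysqldrbd, start mystore, then start mysqld, all kept on the same node:

```
primitive mysqld lsb:mysqld
colocation mysqld_with_mystore inf: mysqld mystore
order mysqld_after_mystore mandatory: mystore mysqld
```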
4. Test
1) On the primary node, connect to mysql and create a database:
mysql> CREATE DATABASE hellodb;
mysql> show databases;
2) Switch nodes (on the primary):
# crm node standby
# crm node online
3) On the former secondary node (i.e. the new primary), test:
mysql> show databases;
The hellodb database is there — the test passed!
This completes the highly available MySQL setup with drbd+corosync.
Comments and corrections are welcome!