Building a MySQL Server HA Cluster with corosync + DRBD + Pacemaker


Case study: Red Hat enterprise clustering and storage management

A MySQL server HA cluster implemented with corosync + DRBD + Pacemaker

Part 1:

http://xjzhujunjie.blog.51cto.com/3582724/886317 

Part 2:

http://xjzhujunjie.blog.51cto.com/3582724/886323

Business requirement:
A company needs highly available servers, i.e. a MySQL server HA cluster
implemented with corosync + DRBD + Pacemaker.
Topology diagram: (not reproduced here)

Implementation steps:
I. Preparation

1.1 Set the hostname, IP address and system time on node1.junjie.com
[root@node1 ~]# hostname
node1.junjie.com
[root@node1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1.junjie.com
[root@node1 ~]# hwclock -s
[root@node1 ~]# date
Tue Feb 7 21:11:46 CST 2012
[root@node1 ~]#
[root@node1 ~]# setup

[root@node1 ~]# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
[root@node1 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:AE:83:D1
inet addr:192.168.101.81 Bcast:192.168.101.255 Mask:255.255.255.0
1.2 Set the hostname, IP address and system time on node2.junjie.com
[root@node2 ~]# hostname
node2.junjie.com
[root@node2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2.junjie.com
[root@node2 ~]# hwclock -s
[root@node2 ~]# date
Tue Feb 7 21:11:59 CST 2012
[root@node2 ~]#
[root@node2 ~]# setup

[root@node2 ~]# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
[root@node2 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:D1:D4:32
inet addr:192.168.101.82 Bcast:192.168.101.255 Mask:255.255.255.0
1.3 Configure the hosts file and SSH keys on node1 and node2
so that later each node can run commands on the other directly.
1.3.1 Edit /etc/hosts on node1
[root@node1 ~]# echo "192.168.101.81 node1.junjie.com node1" >>/etc/hosts
[root@node1 ~]# echo "192.168.101.82 node2.junjie.com node2" >>/etc/hosts
1.3.2 Edit /etc/hosts on node2
[root@node2 ~]# echo "192.168.101.81 node1.junjie.com node1" >>/etc/hosts
[root@node2 ~]# echo "192.168.101.82 node2.junjie.com node2" >>/etc/hosts
1.3.3 Set up SSH keys on node1
[root@node1 ~]# ssh-keygen -t rsa # press Enter at every prompt to accept the defaults
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
1.3.4 Set up SSH keys on node2
[root@node2 ~]# ssh-keygen -t rsa # press Enter at every prompt to accept the defaults
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
[root@node2 ~]# ssh node1 'ifconfig' # should print node1's interface information
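When scripting this step, ssh-keygen can also be driven non-interactively instead of answering prompts. A sketch under the same empty-passphrase assumption the walkthrough uses; KEYDIR is a stand-in directory for illustration (on a real node it would be ~/.ssh):

```shell
#!/bin/sh
# Generate an RSA key pair without any prompts:
#   -q  quiet, -t rsa  key type, -N ""  empty passphrase, -f  key file path.
# KEYDIR is a demo directory; use $HOME/.ssh on the actual nodes.
KEYDIR="${KEYDIR:-./demo-ssh}"
mkdir -p "$KEYDIR"
chmod 700 "$KEYDIR"
[ -f "$KEYDIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
# The public key is then pushed to the peer exactly as in the transcript:
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" [email protected]
```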
1.4 Download the required packages (18 in total, placed under /root/ha/ here)
Download link: http://down.51cto.com/data/402802
[root@node1 ~]# cd ha/
[root@node1 ha]#
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
drbd83-8.3.8-1.el5.centos.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
ldirectord-1.0.1-1.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
mysql-5.5.15-linux2.6-i686.tar.gz
openais-1.1.3-1.6.el5.i386.rpm
openaislib-1.1.3-1.6.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
perl-TimeDate-1.16-5.el5.noarch.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
ldirectord-1.0.1-1.el5.i386.rpm is not needed here, so remove it:
[root@node1 ha]# rm ldirectord-1.0.1-1.el5.i386.rpm
rm: remove regular file `ldirectord-1.0.1-1.el5.i386.rpm'? y
[root@node1 ha]# ssh node2 'mkdir /root/ha'
[root@node1 ha]# scp *.rpm node2:/root/ha/ # copy the packages over to node2
1.5 Configure a local yum repository

[root@node1 ha]#
[root@node1 ha]# mkdir /mnt/cdrom/
[root@node1 ha]# mount /dev/cdrom /mnt/cdrom/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@node1 ha]# yum list all
[root@node1 ha]#
[root@node1 ha]# scp /etc/yum.repos.d/server.repo node2:/etc/yum.repos.d/
server.repo 100% 647 0.6KB/s 00:00
[root@node1 ha]#
[root@node1 ha]# ssh node2 'mkdir /mnt/cdrom/'
[root@node1 ha]# ssh node2 'mount /dev/cdrom /mnt/cdrom/'
[root@node1 ha]# ssh node2 'yum list all'
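The contents of server.repo are never shown above. It is presumably a local repository definition pointing at the mounted install DVD; the sketch below writes one under that assumption (the section names follow the RHEL 5 DVD layout and are a guess; REPO_FILE is a stand-in for /etc/yum.repos.d/server.repo):

```shell
#!/bin/sh
# A guess at what server.repo contains: local repos pointing at the DVD
# mounted on /mnt/cdrom. Adjust the baseurl paths to match your media.
REPO_FILE="${REPO_FILE:-./server.repo}"   # on a real node: /etc/yum.repos.d/server.repo
cat > "$REPO_FILE" <<'EOF'
[Server]
name=Local Server
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=0

[Cluster]
name=Local Cluster
baseurl=file:///mnt/cdrom/Cluster
enabled=1
gpgcheck=0

[ClusterStorage]
name=Local ClusterStorage
baseurl=file:///mnt/cdrom/ClusterStorage
enabled=1
gpgcheck=0
EOF
```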
1.6 Create a new partition on node1
[root@node1 ha]# cd
[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1318 10482412+ 83 Linux
/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris
[root@node1 ~]# fdisk /dev/sda
(fdisk keystrokes: p → n → p → Enter for the defaults → +1000M → p → w)
[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1318 10482412+ 83 Linux
/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris
/dev/sda4 1580 1702 987997+ 83 Linux
[root@node1 ~]# partprobe /dev/sda
[root@node1 ~]# cat /proc/partitions
major minor #blocks name

8 0 20971520 sda
8 1 104391 sda1
8 2 10482412 sda2
8 3 2096482 sda3
8 4 987997 sda4
[root@node1 ~]#
1.7 Create a new partition on node2
[root@node2 ha]# cd
[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1318 10482412+ 83 Linux
/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris
[root@node2 ~]# fdisk /dev/sda
(fdisk keystrokes: p → n → p → Enter for the defaults → +1000M → p → w)
[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1318 10482412+ 83 Linux
/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris
/dev/sda4 1580 1702 987997+ 83 Linux
[root@node2 ~]# partprobe /dev/sda
[root@node2 ~]# cat /proc/partitions
major minor #blocks name

8 0 20971520 sda
8 1 104391 sda1
8 2 10482412 sda2
8 3 2096482 sda3
8 4 987997 sda4
[root@node2 ~]#
II. DRBD installation and configuration
Perform the following on both node1 and node2.
The downloaded packages (placed under /root/ha/):
drbd83-8.3.8-1.el5.centos.i386.rpm
kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
2.1 Install the DRBD packages
[root@node1 ~]# cd ha/
[root@node1 ha]# ls
[root@node1 ha]# yum localinstall -y drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm --nogpgcheck
[root@node1 ha]# cd

[root@node2 ~]# cd ha/
[root@node2 ha]# ls
[root@node2 ha]# yum localinstall -y drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm --nogpgcheck
[root@node2 ha]# cd
2.2 Load the DRBD kernel module
[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod |grep drbd
drbd 228528 0
[root@node1 ~]#
[root@node1 ~]# ssh node2 'modprobe drbd'
[root@node1 ~]# ssh node2 'lsmod |grep drbd'
drbd 228528 0
[root@node1 ~]#
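Note that modprobe only loads the module until the next reboot. On RHEL/CentOS 5 the usual way to load a module at every boot is an executable /etc/rc.modules script, which rc.sysinit runs if present; a sketch (RC_MODULES is a stand-in path so the snippet is safe to try anywhere):

```shell
#!/bin/sh
# Make the drbd module load at boot. RHEL 5 runs /etc/rc.modules (if it
# exists and is executable) early in startup; RC_MODULES stands in for
# that path here so the snippet does not touch a real system.
RC_MODULES="${RC_MODULES:-./rc.modules}"
grep -q '^modprobe drbd$' "$RC_MODULES" 2>/dev/null || \
    echo 'modprobe drbd' >> "$RC_MODULES"
chmod +x "$RC_MODULES"
```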
2.3 Edit the configuration files
At runtime DRBD reads the configuration file /etc/drbd.conf, which describes the mapping between DRBD devices and disk partitions.
2.3.1 Configure node1 as follows
[root@node1 ~]# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/
cp: overwrite `/etc/drbd.conf'? y
[root@node1 ~]# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";
include "drbd.d/*.res";
[root@node1 ~]# cd /etc/drbd.d/
[root@node1 drbd.d]# ll
total 4
-rwxr-xr-x 1 root root 1418 Jun 4 2010 global_common.conf
[root@node1 drbd.d]# cp global_common.conf global_common.conf.bak
# Edit the global configuration file (details omitted)

# Edit the resource configuration file (details omitted)
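The resource file itself is omitted above, but most of its parameters appear later in the transcript (/dev/drbd0 on /dev/sda4, internal metadata, resource-only fencing, detach on I/O error, and the two node addresses). A plausible reconstruction of /etc/drbd.d/mysql.res under those assumptions; the port 7789 is a guess:

```
resource mysql {
  disk {
    on-io-error detach;       # seen later in the drbdsetup command line
    fencing resource-only;    # likewise
  }
  on node1.junjie.com {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.101.81:7789;   # the port is an assumption
    meta-disk internal;
  }
  on node2.junjie.com {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.101.82:7789;
    meta-disk internal;
  }
}
```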

2.3.2 Copy the configuration to node2:
[root@node1 drbd.d]# scp /etc/drbd.conf node2:/etc/
drbd.conf 100% 133 0.1KB/s 00:00
[root@node1 drbd.d]# scp /etc/drbd.d/* node2:/etc/drbd.d/
global_common.conf 100% 427 0.4KB/s 00:00
global_common.conf.bak 100% 1418 1.4KB/s 00:00
mysql.res 100% 330 0.3KB/s 00:00
[root@node1 drbd.d]#
2.4 Validate the configuration and create the metadata for the mysql resource
// validate the configuration (run the following command twice)
[root@node1 drbd.d]# drbdadm adjust mysql
0: Failure: (119) No valid meta-data signature found.

==> Use 'drbdadm create-md res' to initialize meta-data area. <==

Command 'drbdsetup 0 disk /dev/sda4 /dev/sda4 internal --set-defaults --create-device --fencing=resource-only --on-io-error=detach' terminated with exit code 10
[root@node1 drbd.d]# drbdadm adjust mysql
drbdsetup 0 show:5: delay-probe-volume 0k => 0k out of range [4..1048576]k.
[root@node1 drbd.d]# drbdadm create-md mysql
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@node1 drbd.d]# ll /dev/drbd0
brw-r----- 1 root disk 147, 0 Feb 7 21:51 /dev/drbd0

[root@node1 drbd.d]# ssh node2 'drbdadm create-md mysql'
NOT initialized bitmap
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
[root@node1 drbd.d]# ssh node2 'ls -l /dev/drbd0'
brw-rw---- 1 root root 147, 0 Feb 7 21:53 /dev/drbd0
[root@node1 drbd.d]#
2.5 Start the DRBD service and check its status
[root@node1 drbd.d]# service drbd start
Starting DRBD resources: [
mysql
Found valid meta data in the expected location, 1011703808 bytes into /dev/sda4.
d(mysql) s(mysql) n(mysql) ]outdated-wfc-timeout has to be shorter than degr-wfc-timeout
outdated-wfc-timeout implicitly set to degr-wfc-timeout (120s)
........
[root@node1 drbd.d]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16
m:res cs ro ds p mounted fstype
0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C
[root@node1 drbd.d]#
[root@node1 drbd.d]# drbd-overview
0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----
[root@node1 drbd.d]#

[root@node2 ha]# service drbd start
Starting DRBD resources: [
mysql
Found valid meta data in the expected location, 1011703808 bytes into /dev/sda4.
d(mysql) s(mysql) n(mysql) ].
[root@node2 ha]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16
m:res cs ro ds p mounted fstype
0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C
[root@node2 ha]#
[root@node2 ha]# drbd-overview
0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----
[root@node2 ha]#
The output above shows that both nodes are in the Secondary state, so one of them must now be promoted to Primary. Here node1 is made the Primary, so run the following command on node1; you can then watch the synchronization proceed.
[root@node1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mysql
[root@node1 drbd.d]# drbd-overview
0:mysql SyncSource Primary/Secondary UpToDate/Inconsistent C r----
[>...................] sync'ed: 8.3% (909688/987928)K delay_probe: 6
[root@node1 drbd.d]# drbd-overview
0:mysql SyncSource Primary/Secondary UpToDate/Inconsistent C r----
[====>...............] sync'ed: 26.9% (727928/987928)K delay_probe: 23
[root@node1 drbd.d]# drbd-overview
0:mysql SyncSource Primary/Secondary UpToDate/Inconsistent C r----
[============>.......] sync'ed: 67.8% (319864/987928)K delay_probe: 58
[root@node1 drbd.d]# drbd-overview
0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----
[root@node1 drbd.d]#
[root@node1 drbd.d]# cat /proc/drbd
version: 8.3.8 (api:88/proto:86-94)
GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
ns:987928 nr:0 dw:0 dr:987928 al:0 bm:61 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@node1 drbd.d]#
Note: Primary/Secondary means the current node is the Primary; Secondary/Primary means it is the Secondary. Use watch -n 1 'cat /proc/drbd' to follow the synchronization progress.
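The role check in this note can also be scripted, for example to let automation act only when the local node is the Primary. A small sketch that extracts the local role (the part of the ro: field before the slash); DRBD_STATUS is a stand-in variable so the function can be exercised against a saved copy of /proc/drbd:

```shell
#!/bin/sh
# Extract the local DRBD role (Primary or Secondary) from /proc/drbd-style
# output. DRBD_STATUS stands in for /proc/drbd so this can run anywhere.
DRBD_STATUS="${DRBD_STATUS:-/proc/drbd}"

local_role() {
    sed -n 's/.*ro:\([A-Za-z]*\)\/[A-Za-z]*.*/\1/p' "$DRBD_STATUS" | head -n 1
}

# Demo against the status line shown in the transcript:
printf '0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----\n' > ./drbd.sample
DRBD_STATUS=./drbd.sample local_role > ./drbd.role
cat ./drbd.role   # prints: Primary
```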
2.6 Create a filesystem (this may only be done on the Primary node)
[root@node1 drbd.d]# mkfs -t ext3 /dev/drbd0
[root@node1 drbd.d]# mkdir -pv /mnt/mysqldata
mkdir: created directory `/mnt/mysqldata'
[root@node1 drbd.d]# ssh node2 'mkdir -pv /mnt/mysqldata'
mkdir: created directory `/mnt/mysqldata'
[root@node1 drbd.d]# mount /dev/drbd0 /mnt/mysqldata/
[root@node1 drbd.d]#
[root@node1 drbd.d]# cd /mnt/mysqldata/
[root@node1 mysqldata]# ll
total 16
drwx------ 2 root root 16384 Feb 7 22:33 lost+found
[root@node1 mysqldata]# echo "123" >>f1
[root@node1 mysqldata]# touch f2
[root@node1 mysqldata]# ll
total 20
-rw-r--r-- 1 root root 4 Feb 7 22:35 f1
-rw-r--r-- 1 root root 0 Feb 7 22:36 f2
drwx------ 2 root root 16384 Feb 7 22:33 lost+found
[root@node1 mysqldata]# cd
[root@node1 ~]# umount /mnt/mysqldata/
DRBD is now configured successfully!
III. MySQL installation and configuration
3.1 Install and configure MySQL on node1.junjie.com
[root@node1 ~]# groupadd -r mysql
[root@node1 ~]# useradd -g mysql -r mysql
[root@node1 ~]# drbd-overview
0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----
[root@node1 ~]# mount /dev/drbd0 /mnt/mysqldata/
[root@node1 ~]# mkdir /mnt/mysqldata/data
[root@node1 ~]# chown -R mysql:mysql /mnt/mysqldata/data/
[root@node1 ~]# ll /mnt/mysqldata/
total 24
drwxr-xr-x 2 mysql mysql 4096 Feb 7 22:41 data
-rw-r--r-- 1 root root 4 Feb 7 22:35 f1
-rw-r--r-- 1 root root 0 Feb 7 22:36 f2
drwx------ 2 root root 16384 Feb 7 22:33 lost+found
[root@node1 ~]#
[root@node1 ha]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql
create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'
[root@node1 local]# ll
[root@node1 local]# cd mysql
[root@node1 mysql]# chown mysql.mysql .
[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mnt/mysqldata/data/
[root@node1 mysql]# chown -R root .
[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@node1 mysql]# vim /etc/my.cnf
39 thread_concurrency = 2
40 datadir = /mnt/mysqldata/data/ # path where MySQL stores its data files
Provide a SysV init script for MySQL so it can be managed with the service command:
[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node1 mysql]# scp /etc/my.cnf node2:/etc/
my.cnf 100% 4696 4.6KB/s 00:00
[root@node1 mysql]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/
mysqld 100% 10KB 10.4KB/s 00:00
[root@node1 mysql]#
[root@node1 mysql]# chkconfig --add mysqld
[root@node1 mysql]# chkconfig mysqld off
[root@node1 mysql]# chkconfig --list mysqld
mysqld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@node1 mysql]# service mysqld start
Starting MySQL..... [ OK ]
[root@node1 mysql]# ll /mnt/mysqldata/data/
ib_logfile0 mysql mysql-bin.000003 node1.junjie.com.pid
ib_logfile1 mysql-bin.000001 mysql-bin.index performance_schema
ibdata1 mysql-bin.000002 node1.junjie.com.err test
[root@node1 mysql]#
[root@node1 mysql]# service mysqld stop
Shutting down MySQL. [ OK ]
[root@node1 mysql]#
To make this MySQL installation conform to system conventions and to export its development components to the system, a few more steps are needed:

Export MySQL's man pages into man's search path by adding the following line:
[root@node1 mysql]# vim /etc/man.config
142 MANPATH /usr/local/mysql/man
Export MySQL's header files to the system include path /usr/include; a symlink is enough:
[root@node1 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql
create symbolic link `/usr/include/mysql' to `/usr/local/mysql/include'
[root@node1 mysql]#
Export MySQL's libraries to the system library search path (any file under /etc/ld.so.conf.d/ with a .conf suffix will do), then reload the library cache:
[root@node1 mysql]# echo '/usr/local/mysql/lib' >> /etc/ld.so.conf.d/mysql.conf
[root@node1 mysql]# ldconfig -v |grep mysql
/usr/local/mysql/lib:
libmysqlclient.so.18 -> libmysqlclient_r.so.18.0.0
Adjust the PATH environment variable so that all users can run the MySQL commands directly:
[root@node1 mysql]# vim /etc/profile
59 PATH=$PATH:/usr/local/mysql/bin # append this line
[root@node1 mysql]# . /etc/profile
[root@node1 mysql]# echo $PATH
/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin
Unmount the DRBD device:
[root@node1 mysql]# umount /mnt/mysqldata/
3.2 Install and configure MySQL on node2.junjie.com
Add the user and group:
[root@node2 ~]# groupadd -r mysql
[root@node2 ~]# useradd -g mysql -r mysql
Only the Primary can be read, written and mounted, so switch roles: make node2 the Primary and node1 the Secondary.
On node1:
[root@node1 ~]# drbdadm secondary mysql
[root@node1 ~]# drbd-overview
0:mysql Connected Secondary/Secondary UpToDate/UpToDate C r----
[root@node1 ~]#
On node2:
[root@node2 ~]# drbdadm primary mysql
[root@node2 ~]# drbd-overview
0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----
[root@node2 ~]#
Mount the DRBD device:
[root@node2 ~]# mount /dev/drbd0 /mnt/mysqldata/
[root@node2 ~]# ll /mnt/mysqldata/
total 24
drwxr-xr-x 5 mysql mysql 4096 Feb 7 23:13 data
-rw-r--r-- 1 root root 4 Feb 7 22:35 f1
-rw-r--r-- 1 root root 0 Feb 7 22:36 f2
drwx------ 2 root root 16384 Feb 7 22:33 lost+found
[root@node2 ~]#
Install MySQL:
[root@node2 ~]# cd ha/
[root@node2 ha]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/
[root@node2 ha]# cd /usr/local/
[root@node2 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql
create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'
[root@node2 local]# cd mysql
Do NOT initialize the database here, because it was already initialized on node1:
[root@node2 mysql]# chown -R root:mysql .
The MySQL configuration file and SysV init script were already copied over from node1, so nothing needs to be added.
Register the mysqld service:
[root@node2 mysql]# chkconfig --add mysqld
[root@node2 mysql]# chkconfig mysqld off
[root@node2 mysql]# chkconfig --list mysqld
mysqld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@node2 mysql]#
Start the MySQL service:
[root@node2 mysql]# service mysqld start
Starting MySQL....... [ OK ]
[root@node2 mysql]# ls /mnt/mysqldata/data/
ib_logfile0 mysql-bin.000001 mysql-bin.index performance_schema
ib_logfile1 mysql-bin.000002 node1.junjie.com.err test
ibdata1 mysql-bin.000003 node2.junjie.com.err
mysql mysql-bin.000004 node2.junjie.com.pid
Stop the service after testing:
[root@node2 mysql]# service mysqld stop
Shutting down MySQL. [ OK ]
[root@node2 mysql]#
As on node1, to make the installation conform to system conventions and export its development components, a few more steps are needed:

Export MySQL's man pages into man's search path by adding the following line:
[root@node2 mysql]# vim /etc/man.config
142 MANPATH /usr/local/mysql/man
Export MySQL's header files to the system include path /usr/include; a symlink is enough:
[root@node2 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql
create symbolic link `/usr/include/mysql' to `/usr/local/mysql/include'
[root@node2 mysql]#
Export MySQL's libraries to the system library search path (any file under /etc/ld.so.conf.d/ with a .conf suffix will do), then reload the library cache:
[root@node2 mysql]# echo '/usr/local/mysql/lib' >> /etc/ld.so.conf.d/mysql.conf
[root@node2 mysql]# ldconfig -v |grep mysql
/usr/local/mysql/lib:
libmysqlclient.so.18 -> libmysqlclient_r.so.18.0.0
Adjust the PATH environment variable so that all users can run the MySQL commands directly:
[root@node2 mysql]# vim /etc/profile
59 PATH=$PATH:/usr/local/mysql/bin # append this line
[root@node2 mysql]# . /etc/profile
[root@node2 mysql]# echo $PATH
/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin
Unmount the DRBD device:
[root@node2 mysql]# umount /mnt/mysqldata/
IV. Installing and configuring corosync + Pacemaker
4.1 Installation
[root@node1 ~]# cd ha/
[root@node1 ha]# yum localinstall -y *.rpm --nogpgcheck

[root@node2 ~]# cd ha/
[root@node2 ha]# yum localinstall -y *.rpm --nogpgcheck

Installed:
cluster-glue.i386 0:1.0.6-1.6.el5 cluster-glue-libs.i386 0:1.0.6-1.6.el5
corosync.i386 0:1.2.7-1.1.el5 corosynclib.i386 0:1.2.7-1.1.el5
heartbeat.i386 0:3.0.3-2.3.el5 heartbeat-libs.i386 0:3.0.3-2.3.el5
libesmtp.i386 0:1.0.4-5.el5 openais.i386 0:1.1.3-1.6.el5
openaislib.i386 0:1.1.3-1.6.el5 pacemaker.i386 0:1.1.5-1.1.el5
pacemaker-cts.i386 0:1.1.5-1.1.el5 pacemaker-libs.i386 0:1.1.5-1.1.el5
perl-TimeDate.noarch 1:1.16-5.el5 resource-agents.i386 0:1.0.4-1.1.el5

Dependency Installed:
libibverbs.i386 0:1.1.2-4.el5 librdmacm.i386 0:1.0.8-5.el5
libtool-ltdl.i386 0:1.5.22-6.1 lm_sensors.i386 0:2.10.7-4.el5
openhpi-libs.i386 0:2.14.0-5.el5 openib.noarch 0:1.4.1-3.el5

Complete!
4.2 Configure the node1 node
1: Change to the main configuration directory
[root@node1 ha]# cd /etc/corosync/
[root@node1 corosync]# ll
total 20
-rw-r--r-- 1 root root 5384 Jul 28 2010 amf.conf.example
-rw-r--r-- 1 root root 436 Jul 28 2010 corosync.conf.example
drwxr-xr-x 2 root root 4096 Jul 28 2010 service.d
drwxr-xr-x 2 root root 4096 Jul 28 2010 uidgid.d
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# vim corosync.conf
10 bindnetaddr: 192.168.101.0 # change this line
# append the following lines
33 service {
34 ver: 0
35 name: pacemaker
36 use_mgmtd: yes
37 }
38 aisexec {
39 user: root
40 group: root
41 }
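For context, the rest of corosync.conf (the parts left at their shipped example values) presumably looks roughly like the corosync 1.2 defaults sketched below; mcastaddr and mcastport are the example-file values, not something confirmed by this walkthrough. Note the logfile path, which is why /var/log/cluster is created in the next step:

```
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.101.0   # the line changed above
                mcastaddr: 226.94.1.1        # example default, an assumption
                mcastport: 5405              # example default, an assumption
        }
}

logging {
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        timestamp: on
}
```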
2: Create the cluster log directory
[root@node1 corosync]# mkdir /var/log/cluster
3: Hosts joining the cluster must authenticate, so generate an authkey
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@node1 corosync]# ll
total 28
-rw-r--r-- 1 root root 5384 Jul 28 2010 amf.conf.example
-r-------- 1 root root 128 Feb 8 00:13 authkey
-rw-r--r-- 1 root root 561 Feb 8 00:12 corosync.conf
-rw-r--r-- 1 root root 436 Jul 28 2010 corosync.conf.example
drwxr-xr-x 2 root root 4096 Jul 28 2010 service.d
drwxr-xr-x 2 root root 4096 Jul 28 2010 uidgid.d
[root@node1 corosync]#
4: Copy the files from node1 over to node2 (remember to use -p to preserve permissions and timestamps)
[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
authkey 100% 128 0.1KB/s 00:00
corosync.conf 100% 561 0.6KB/s 00:00
[root@node1 corosync]# ssh node2 'mkdir -pv /var/log/cluster'
mkdir: created directory `/var/log/cluster'
[root@node1 corosync]#
5: Start the corosync service on node1 and node2
[root@node1 corosync]# service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
6: Verify that the corosync engine started properly
[root@node1 ~]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages
Feb 8 01:12:48 node1 corosync[5198]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
Feb 8 01:12:49 node1 corosync[5655]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Feb 8 01:12:49 node1 corosync[5655]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
7: Check whether the initial membership notifications were sent
[root@node1 ~]# grep -i totem /var/log/messages
Feb 8 01:12:49 node1 corosync[5655]: [TOTEM ] Initializing transport (UDP/IP).
Feb 8 01:12:49 node1 corosync[5655]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 8 01:12:49 node1 corosync[5655]: [TOTEM ] The network interface [192.168.101.81] is now up.
Feb 8 01:12:49 node1 corosync[5655]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
8: Check whether any errors occurred during startup
[root@node1 drbd.d]# grep -i error: /var/log/messages |grep -v unpack_resources
Two errors appeared at this point (the screenshots are not reproduced here); their resolution is continued in the follow-up posts.

Note: this article is no longer maintained. Please refer to:

Part 1: http://xjzhujunjie.blog.51cto.com/3582724/886317

Part 2: http://xjzhujunjie.blog.51cto.com/3582724/886323
