corosync+drbd+mysql (a highly available MySQL server cluster)

drbd+corosync+pacemaker

Building a highly available MySQL server cluster

Lab environment:

node1 node1.a.com 192.168.10.10

node2 node2.a.com 192.168.10.20

VIP: 192.168.50.100

Lab steps:

Modify the network parameters on each cluster node

node1

[root@node1 ~]# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1.a.com

[root@node1 ~]# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.10.10 node1.a.com node1

192.168.10.20 node2.a.com node2

[root@node1 ~]# hostname

node1.a.com

Synchronize the time across the cluster nodes:

[root@node1 ~]# hwclock -s

node2

[root@node2 ~]# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node2.a.com

[root@node2 ~]# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.10.10 node1.a.com node1

192.168.10.20 node2.a.com node2

[root@node2 ~]# hostname

node2.a.com

[root@node2 ~]# hwclock -s
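
If the hardware clocks themselves are inaccurate, an alternative is to sync both nodes against an NTP server first and then write the result back to the hardware clock (this assumes the lab network can reach a public pool server such as 0.pool.ntp.org, which is not part of the original setup):

[root@node1 ~]# ntpdate 0.pool.ntp.org && hwclock -w

[root@node2 ~]# ntpdate 0.pool.ntp.org && hwclock -w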

Generate keys on each node to enable passwordless communication

node1

Generate an RSA asymmetric key pair:

[root@node1 ~]# ssh-keygen -t rsa

Copy it to node2:

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2

The authenticity of host 'node2 (192.168.10.20)' can't be established.

RSA key fingerprint is f8:b2:a1:a2:27:51:b2:86:c9:d6:c9:74:a4:e3:e5:93.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,192.168.10.20' (RSA) to the list of known hosts.

root@node2's password: 123456

Now try logging into the machine, with "ssh 'root@node2'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

node2

[root@node2 ~]# ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

ea:25:f1:47:37:28:26:6a:75:57:1c:46:81:a8:41:6b root@node2.a.com

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub node1

The authenticity of host 'node1 (192.168.10.10)' can't be established.

RSA key fingerprint is f8:b2:a1:a2:27:51:b2:86:c9:d6:c9:74:a4:e3:e5:93.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node1,192.168.10.10' (RSA) to the list of known hosts.

root@node1's password: 123456

Now try logging into the machine, with "ssh 'node1'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
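
Before moving on, it is worth confirming that passwordless SSH now works in both directions; this quick check (not part of the original transcript) should print the peer's hostname without asking for a password:

[root@node1 ~]# ssh node2 'hostname'

[root@node2 ~]# ssh node1 'hostname'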

Configure the yum client on each node

node1

[root@node1 ~]# vim /etc/yum.repos.d/rhel-debuginfo.repo

[rhel-server]

name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-cluster]

name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-clusterstorage]

name=Red Hat Enterprise Linux clusterstorage

baseurl=file:///mnt/cdrom/ClusterStorage

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[root@node1 ~]# scp /etc/yum.repos.d/rhel-debuginfo.repo node2:/etc/yum.repos.d/

rhel-debuginfo.repo 100% 319 0.3KB/s 00:00
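
The baseurl entries above assume the installation DVD is mounted at /mnt/cdrom. If it is not mounted yet, something along these lines (run on both nodes; the /dev/cdrom device name is an assumption) makes the paths valid and lets yum see the repositories:

[root@node1 ~]# mkdir -p /mnt/cdrom

[root@node1 ~]# mount /dev/cdrom /mnt/cdrom

[root@node1 ~]# yum repolist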

Upload the downloaded rpm packages to each Linux node

[root@node1 ~]# ll

total 162332

drwxr-xr-x 2 root root 4096 Mar 19 23:34 Desktop

-rw------- 1 root root 1330 Feb 9 00:27 anaconda-ks.cfg

-rw-r--r-- 1 root root 271360 May 13 16:41 cluster-glue-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 133254 May 13 16:41 cluster-glue-libs-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 170052 May 13 16:41 corosync-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 158502 May 13 16:41 corosynclib-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 221868 May 13 16:41 drbd83-8.3.8-1.el5.centos.i386.rpm

-rw-r--r-- 1 root root 165591 May 13 16:41 heartbeat-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root 289600 May 13 16:41 heartbeat-libs-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root 35236 Feb 9 00:27 install.log

-rw-r--r-- 1 root root 3995 Feb 9 00:26 install.log.syslog

-rw-r--r-- 1 root root 125974 May 13 16:41 kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

-rw-r--r-- 1 root root 60458 May 13 16:41 libesmtp-1.0.4-5.el5.i386.rpm

-rw-r--r-- 1 root root 162247449 May 13 16:41 mysql-5.5.15-linux2.6-i686.tar.gz

-rw-r--r-- 1 root root 207085 May 13 16:41 openais-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 94614 May 13 16:41 openaislib-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 796813 May 13 16:41 pacemaker-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 207925 May 13 16:41 pacemaker-cts-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 332026 May 13 16:41 pacemaker-libs-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 32818 May 13 16:41 perl-TimeDate-1.16-5.el5.noarch.rpm

-rw-r--r-- 1 root root 388632 May 13 16:41 resource-agents-1.0.4-1.1.el5.i386.rpm

[root@node1 ~]# yum localinstall *.rpm -y --nogpgcheck

[root@node1 ~]# scp *.rpm node2:/root

[root@node2 ~]# yum localinstall *.rpm -y --nogpgcheck
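
A quick way to confirm that the packages landed on both nodes (a verification step not shown in the original run) is to query rpm for the key components:

[root@node1 ~]# rpm -q corosync pacemaker drbd83 kmod-drbd83

[root@node1 ~]# ssh node2 'rpm -q corosync pacemaker drbd83 kmod-drbd83'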

Add a partition of the same size and type on each node to serve as the DRBD backing device (sdb1)

node1

A small extra disk needs to be added first:

[root@node1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

The number of cylinders for this disk is set to 1044.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

(e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1044, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044): +1G

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@node1 ~]# partprobe /dev/sdb

[root@node1 ~]# cat /proc/partitions

major minor #blocks name

8 0 20971520 sda

8 1 104391 sda1

8 2 2096482 sda2

8 3 10482412 sda3

8 16 8388608 sdb

8 17 987966 sdb1

Repeat the same steps on node2.
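
To double-check that both nodes expose an sdb1 of the same size before configuring DRBD (a consistency check that is not in the original transcript):

[root@node1 ~]# grep sdb1 /proc/partitions

[root@node1 ~]# ssh node2 'partprobe /dev/sdb; grep sdb1 /proc/partitions'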

Configure DRBD

node1

Copy the sample configuration file into place:

[root@node1 ~]# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/

[root@node1 ~]# cd /etc/drbd.d/

Back up global_common.conf:

[root@node1 drbd.d]# cp global_common.conf global_common.conf.bak

[root@node1 drbd.d]# vim global_common.conf

global {

usage-count no;

}

common {

protocol C;

handlers {

}

startup {

wfc-timeout 120;

degr-wfc-timeout 120;

}

disk {

on-io-error detach;

}

net {

cram-hmac-alg "sha1";

shared-secret "mydrbdlab";

}

syncer {

rate 100M;

}

}

Define the mysql resource:

[root@node1 drbd.d]# vim mysql.res

resource mysql {

on node1.a.com {

device /dev/drbd0;

disk /dev/sdb1;

address 192.168.10.10:7789;

meta-disk internal;

}

on node2.a.com {

device /dev/drbd0;

disk /dev/sdb1;

address 192.168.10.20:7789;

meta-disk internal;

}

}
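
Before copying the files to node2, drbdadm can be asked to parse and dump the resource, which catches syntax errors early (an optional sanity check; the original run goes straight to the copy):

[root@node1 drbd.d]# drbdadm dump mysql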

Copy the drbd.* files above to node2:

[root@node1 drbd.d]# scp -r /etc/drbd.* node2:/etc/

drbd.conf 100% 133 0.1KB/s 00:00

global_common.conf.bak 100% 1418 1.4KB/s 00:00

global_common.conf 100% 1548 1.5KB/s 00:00

mysql.res 100% 241 0.2KB/s 00:00

On node1, initialize the mysql resource and start the corresponding service:

[root@node1 drbd.d]# drbdadm create-md mysql

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node1 drbd.d]# service drbd start

Starting DRBD resources: [ n(mysql) ]..........

***************************************************************

DRBD's startup script waits for the peer node(s) to appear.

- In case this node was already a degraded cluster before the

reboot the timeout is 120 seconds. [degr-wfc-timeout]

- If the peer was available before the reboot the timeout will

expire after 120 seconds. [wfc-timeout]

(These values are for resource 'mysql'; 0 sec -> wait forever)

To abort waiting enter 'yes' [ 25]:

Use the drbd-overview command to check the status:

[root@node1 drbd.d]# drbd-overview

0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----

node2

[root@node2 ~]# drbdadm create-md mysql

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node2 ~]# service drbd start

[root@node2 drbd.d]# drbd-overview

0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----

The output above shows that both nodes are currently in the Secondary state, so one of them must be promoted to Primary; here node1 is made the primary node:

[root@node1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mysql

[root@node1 drbd.d]# drbd-overview

0:mysql SyncSource Primary/Secondary UpToDate/Inconsistent C r----

[=======>............] sync'ed: 40.5% (592824/987896)K delay_probe: 24

[root@node2 ~]# drbd-overview

0:mysql Connected Secondary/Primary UpToDate/UpToDate C r----

[root@node2 ~]# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16

0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----

ns:0 nr:987896 dw:987896 dr:0 al:0 bm:61 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Watch the synchronization progress:

[root@node1 drbd.d]# watch -n 1 'cat /proc/drbd' (press Ctrl+C to exit)

Create a file system (this can only be done on the Primary node):

[root@node1 drbd.d]# mkfs -t ext3 /dev/drbd0

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

123648 inodes, 246974 blocks

12348 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

15456 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@node1 drbd.d]# mkdir /mysqldata

[root@node1 drbd.d]# mount /dev/drbd0 /mysqldata/

[root@node1 drbd.d]# cd /mysqldata/

[root@node1 mysqldata]# touch f1 f2

[root@node1 mysqldata]# ll /mysqldata/

total 16

-rw-r--r-- 1 root root 0 May 13 18:24 f1

-rw-r--r-- 1 root root 0 May 13 18:24 f2

drwx------ 2 root root 16384 May 13 18:23 lost+found

[root@node1 mysqldata]# cd

[root@node1 ~]# umount /mysqldata/

Set node1 to the Secondary role:

[root@node1 ~]# drbdadm secondary mysql

[root@node1 ~]# drbd-overview

0:mysql Connected Secondary/Secondary UpToDate/UpToDate C r----

Set node2 to the Primary role:

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# drbd-overview

0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node2 ~]# mkdir /mysqldata

[root@node2 ~]# mount /dev/drbd0 /mysqldata/

[root@node2 ~]# ll /mysqldata/

total 16

-rw-r--r-- 1 root root 0 May 13 18:24 f1

-rw-r--r-- 1 root root 0 May 13 18:24 f2

drwx------ 2 root root 16384 May 13 18:23 lost+found

[root@node2 ~]# umount /mysqldata/

Installing and configuring MySQL

Add the mysql user and group:

[root@node1 ~]# groupadd -r mysql

You have new mail in /var/spool/mail/root

[root@node1 ~]# useradd -g mysql -r mysql

Since only the Primary device can be read, written and mounted, node1 must be made the Primary again and node2 the Secondary:

[root@node2 ~]# drbdadm secondary mysql

You have new mail in /var/spool/mail/root

node1

[root@node1 ~]# drbdadm primary mysql

[root@node1 ~]# drbd-overview

0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----

Mount the DRBD device:

[root@node1 ~]# mount /dev/drbd0 /mysqldata/

[root@node1 ~]# mkdir /mysqldata/data

The data directory will hold MySQL's data, so change its owner and group:

[root@node1 ~]# chown -R mysql.mysql /mysqldata/data/

Check:

[root@node1 ~]# ll /mysqldata/

total 20

drwxr-xr-x 2 mysql mysql 4096 May 13 18:52 data

-rw-r--r-- 1 root root 0 May 13 18:35 f1

-rw-r--r-- 1 root root 0 May 13 18:35 f2

drwx------ 2 root root 16384 May 13 18:34 lost+found

Install MySQL:

[root@node1 ~]# tar zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node1 ~]# cd /usr/local/

[root@node1 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'

[root@node1 local]# cd mysql

[root@node1 mysql]# chown -R mysql.mysql .

Initialize the MySQL database:

[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mysqldata/data

[root@node1 mysql]# chown -R root .

Provide a main configuration file for MySQL:

[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf

Edit the file and set thread_concurrency to twice the number of CPUs; here the following line is used:

[root@node1 mysql]# vim /etc/my.cnf

thread_concurrency = 2

Also add the following line to specify where MySQL stores its data files:

datadir = /mysqldata/data
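
Put together, the relevant part of the [mysqld] section in /etc/my.cnf would look roughly as follows (a sketch; apart from the datadir and thread_concurrency lines discussed above, the values come from the stock my-large.cnf and may differ slightly):

[mysqld]

port = 3306

socket = /tmp/mysql.sock

skip-external-locking

datadir = /mysqldata/data

thread_concurrency = 2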

Provide a SysV init script for MySQL so the service command can manage it:

[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld

The configuration file and SysV init script on node2 are identical, so simply copy them over:

[root@node1 mysql]# scp /etc/my.cnf node2:/etc/

my.cnf 100% 4691 4.6KB/s 00:00

[root@node1 mysql]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/

mysqld 100% 10KB 10.4KB/s 00:00

Add it to the service list:

[root@node1 mysql]# chkconfig --add mysqld

Make sure it does not start automatically at boot; the CRM will control it:

[root@node1 mysql]# chkconfig mysqld off

Now the service can be started for a quick test:

[root@node1 mysql]# service mysqld start

Starting MySQL...... [ OK ]

Stop the service after testing:

[root@node1 mysql]# service mysqld stop

Shutting down MySQL.. [ OK ]

[root@node1 mysql]# ls /mysqldata/data/

ib_logfile0 ib_logfile1 ibdata1 mysql mysql-bin.000001 mysql-bin.index node1.a.com.err performance_schema test

To make the MySQL installation conform to system conventions and export its development components to the system, a few more steps are needed:

Export MySQL's man pages to the man command's search path:

[root@node1 mysql]# vim /etc/man.config

Add the following line:

MANPATH /usr/local/mysql/man

Export MySQL's header files to the system header path /usr/include; a simple symbolic link is enough:

[root@node1 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql

create symbolic link `/usr/include/mysql' to `/usr/local/mysql/include'

Export MySQL's libraries to the system library search path (any file under /etc/ld.so.conf.d/ with a .conf suffix will do):

[root@node1 mysql]# echo '/usr/local/mysql/lib/' > /etc/ld.so.conf.d/mysql.conf

Reload the system library cache:

[root@node1 mysql]# ldconfig
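
A quick way to confirm that the new library path was picked up (an optional check, not in the original run):

[root@node1 mysql]# ldconfig -p | grep mysql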

Modify the PATH environment variable so that all users on the system can use the MySQL commands directly; append the following line to /etc/profile:

PATH=$PATH:/usr/local/mysql/bin

Re-read the environment variables:

[root@node1 mysql]# vim /etc/profile

[root@node1 mysql]# . /etc/profile

[root@node1 mysql]# echo $PATH

/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin

[root@node1 mysql]# umount /mysqldata/

node2

[root@node2 ~]# groupadd -r mysql

[root@node2 ~]# useradd -g mysql -r mysql

Since only the Primary device can be read, written and mounted, node2 must now be made the Primary and node1 the Secondary:

[root@node1 mysql]# drbdadm secondary mysql

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# mount /dev/drbd0 /mysqldata/

[root@node2 ~]# ls /mysqldata/

data f1 f2 lost+found

[root@node2 ~]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node2 ~]# cd /usr/local/

[root@node2 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'

[root@node2 local]# cd mysql

[root@node2 mysql]# chown -R root:mysql .

[root@node2 mysql]# cp support-files/my-large.cnf /etc/my.cnf

cp: overwrite `/etc/my.cnf'? n

[root@node2 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld

cp: overwrite `/etc/rc.d/init.d/mysqld'? n

[root@node2 mysql]# chkconfig --add mysqld

[root@node2 mysql]# chkconfig mysqld off

[root@node2 mysql]# service mysqld start

Starting MySQL...... [ OK ]

[root@node2 mysql]# ls /mysqldata/data/

ib_logfile0 ibdata1 mysql-bin.000001 mysql-bin.index node2.a.com.err performance_schema

ib_logfile1 mysql mysql-bin.000002 node1.a.com.err node2.a.com.pid test

[root@node2 mysql]# service mysqld stop

Shutting down MySQL.. [ OK ]

To make the MySQL installation conform to system conventions and export its development components to the system, the same steps as on node1 are required; since the procedure is identical, they are not repeated here.

Unmount the device:

[root@node2 mysql]# umount /dev/drbd0

Installing and configuring corosync + pacemaker

node1

[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# cp corosync.conf.example corosync.conf

[root@node1 corosync]# vim corosync.conf

compatibility: whitetank

totem { // protocol settings used for cluster heartbeat communication

version: 2

secauth: off

threads: 0

interface {

ringnumber: 0

bindnetaddr: 192.168.10.0 // the network the cluster nodes are on; this is the only line that needs to be changed

mcastaddr: 226.94.1.1

mcastport: 5405

}

}

logging {

fileline: off

to_stderr: no // whether to send log output to standard error

to_logfile: yes // log to a file

to_syslog: yes // also log to syslog (turning one of the two off is recommended, since double logging hurts performance)

logfile: /var/log/cluster/corosync.log // the cluster directory must be created manually

debug: off // turn on when troubleshooting

timestamp: on // whether to record timestamps in the log

//****** the following belongs to openais and does not need to be enabled ******//

logger_subsys {

subsys: AMF

debug: off

}

}

amf {

mode: disabled

}

//****** additional settings: the above only covers the messaging layer; pacemaker must be declared as a service ******//

service {

ver: 0

name: pacemaker

use_mgmtd: yes

}

//****** openais itself is not used, but some of its sub-options are ******//

aisexec {

user: root

group: root

}

Create the cluster log directory:

[root@node1 corosync]# mkdir /var/log/cluster

To keep other hosts from joining the cluster without authorization, authentication is required; generate an authkey:

[root@node1 corosync]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 888).

Press keys on your keyboard to generate entropy (bits = 952).

Press keys on your keyboard to generate entropy (bits = 1016).

Writing corosync key to /etc/corosync/authkey.

[root@node1 corosync]# ll

total 28

-rw-r--r-- 1 root root 5384 Jul 28 2010 amf.conf.example

-r-------- 1 root root 128 May 13 20:50 authkey

-rw-r--r-- 1 root root 556 May 13 20:49 corosync.conf

-rw-r--r-- 1 root root 436 Jul 28 2010 corosync.conf.example

drwxr-xr-x 2 root root 4096 Jul 28 2010 service.d

drwxr-xr-x 2 root root 4096 Jul 28 2010 uidgid.d

Copy the files from node1 to node2 (remember to use -p to preserve permissions):

[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

authkey 100% 128 0.1KB/s 00:00

corosync.conf 100% 556 0.5KB/s 00:00

[root@node1 corosync]# ssh node2 'mkdir /var/log/cluster'

Start the corosync service on node1 and node2:

[root@node1 corosync]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

[root@node2 ~]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]
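
With corosync running on both nodes, its own tool can confirm the ring status on each node (an optional quick check before digging through the logs):

[root@node1 corosync]# corosync-cfgtool -s

[root@node1 corosync]# ssh node2 'corosync-cfgtool -s'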

Verify that the corosync engine started correctly:

[root@node1 corosync]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Feb 8 16:33:15 localhost smartd[2928]: Opened configuration file /etc/smartd.conf

Feb 8 16:33:15 localhost smartd[2928]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 16:45:09 localhost smartd[3395]: Opened configuration file /etc/smartd.conf

Feb 8 16:45:09 localhost smartd[3395]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 17:13:11 localhost smartd[3386]: Opened configuration file /etc/smartd.conf

Feb 8 17:13:11 localhost smartd[3386]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Mar 19 23:33:34 localhost smartd[3385]: Opened configuration file /etc/smartd.conf

Mar 19 23:33:34 localhost smartd[3385]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 09:12:46 localhost smartd[3377]: Opened configuration file /etc/smartd.conf

May 6 09:12:46 localhost smartd[3377]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 15:37:51 localhost smartd[3408]: Opened configuration file /etc/smartd.conf

May 6 15:37:51 localhost smartd[3408]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:31:06 localhost smartd[3416]: Opened configuration file /etc/smartd.conf

May 8 09:31:06 localhost smartd[3416]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:47:42 node1 smartd[3353]: Opened configuration file /etc/smartd.conf

May 8 09:47:42 node1 smartd[3353]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 12:10:18 node1 corosync[7838]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

May 8 12:10:18 node1 corosync[7838]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check whether the initial membership notifications were sent out:

[root@node1 corosync]# grep -i totem /var/log/messages

May 8 12:10:18 node1 corosync[7838]: [TOTEM ] Initializing transport (UDP/IP).

May 8 12:10:18 node1 corosync[7838]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

May 8 12:10:19 node1 corosync[7838]: [TOTEM ] The network interface [192.168.10.10] is now up.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] Process pause detected for 899 ms, flushing membership messages.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:10:32 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:10:32 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:12:03 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:12:03 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:12:15 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check whether any errors occurred during startup:

[root@node1 corosync]# grep -i error: /var/log/messages |grep -v unpack_resources

Feb 8 17:03:46 localhost kernel: sd 1:0:0:0: SCSI error: return code = 0x00010000

Check whether pacemaker has started:

[root@node1 corosync]# grep -i pcmk_startup /var/log/messages

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: CRM: Initialized

May 8 12:10:23 node1 corosync[7838]: [pcmk ] Logging: Initialized pcmk_startup

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Service: 9

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Local hostname: node1.a.com

node2

Check whether the initial membership notifications were sent out:

[root@node2 ~]# grep -i totem /var/log/messages

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] Initializing transport (UDP/IP).

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] The network interface [192.168.10.20] is now up.

May 8 12:12:18 node2 corosync[7817]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check the corosync status:

[root@node2 ~]# service corosync status

corosync (pid 7817) is running...

Verify that the corosync engine started correctly:

[root@node2 ~]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Feb 8 16:33:15 localhost smartd[2928]: Opened configuration file /etc/smartd.conf

Feb 8 16:33:15 localhost smartd[2928]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 16:45:09 localhost smartd[3395]: Opened configuration file /etc/smartd.conf

Feb 8 16:45:09 localhost smartd[3395]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 17:13:11 localhost smartd[3386]: Opened configuration file /etc/smartd.conf

Feb 8 17:13:11 localhost smartd[3386]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Mar 19 23:33:34 localhost smartd[3385]: Opened configuration file /etc/smartd.conf

Mar 19 23:33:34 localhost smartd[3385]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 09:18:50 localhost smartd[3379]: Opened configuration file /etc/smartd.conf

May 6 09:18:50 localhost smartd[3379]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:50:25 node2 smartd[3357]: Opened configuration file /etc/smartd.conf

May 8 09:50:26 node2 smartd[3357]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 12:12:16 node2 corosync[7817]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

May 8 12:12:16 node2 corosync[7817]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check whether any errors occurred during startup:

[root@node2 ~]# grep -i error: /var/log/messages |grep -v unpack_resources

Feb 8 17:03:46 localhost kernel: sd 1:0:0:0: SCSI error: return code = 0x00010000

Check whether pacemaker has started:

[root@node2 ~]# grep -i pcmk_startup /var/log/messages

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: CRM: Initialized

May 8 12:12:17 node2 corosync[7817]: [pcmk ] Logging: Initialized pcmk_startup

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Service: 9

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Local hostname: node2.a.com

node1

On either node, check the cluster membership status:

[root@node1 corosync]# crm status

============

Last updated: Tue May 8 13:08:41 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

0 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Configure the cluster's global properties

corosync/pacemaker enables stonith by default, but this cluster has no stonith device, so the default configuration is not yet usable; disable stonith first:

[root@node1 corosync]# crm configure property stonith-enabled=false

For a two-node cluster, quorum must be ignored so that vote counting no longer matters and a single node can keep running:

[root@node1 corosync]# crm configure property no-quorum-policy=ignore

Define a resource stickiness value so that resources do not move between nodes at will, since such moves waste system resources.

Resource stickiness values and their effects:

0: the default. The resource is placed at the most suitable location in the system. This means it is moved when a better or less loaded node becomes available. The effect is roughly the same as automatic failback, except that the resource may move to a node other than the one it was previously active on.

Greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available. The higher the value, the stronger the preference to stay.

Less than 0: the resource prefers to move away from its current location. The higher the absolute value, the stronger the preference to leave.

INFINITY: unless the resource is forced off (node shutdown, node standby, migration-threshold reached, or a configuration change), it always stays where it is. This is almost equivalent to disabling automatic failback entirely.

-INFINITY: the resource always moves away from its current location.

Here a default stickiness value is assigned to resources as follows:

[root@node1 corosync]# crm configure rsc_defaults resource-stickiness=100
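
Before any resources are defined, the three global settings can be verified in one shot (an optional check; the full configuration also appears in the next step):

[root@node1 corosync]# crm configure show | grep -E 'stonith-enabled|no-quorum-policy|resource-stickiness'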

Define the cluster services and resources (on node1)

View the current cluster configuration and make sure the global properties suitable for a two-node cluster are in place:

[root@node1 corosync]# crm configure show

node node1.a.com

node node2.a.com

primitive webip ocf:heartbeat:IPaddr \

params ip="192.168.10.100"

primitive webserver lsb:httpd

group web webip webserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100"

Define the already-configured DRBD device /dev/drbd0 as a cluster service; DRBD must no longer be started by its own init script:

[root@node1 corosync]# service drbd stop

Stopping all DRBD resources: .

[root@node1 corosync]# chkconfig drbd off

[root@node1 corosync]# drbd-overview

drbd not loaded

[root@node1 corosync]# ssh node2 "service drbd stop"

Stopping all DRBD resources: .

[root@node1 corosync]# ssh node2 "chkconfig drbd off"

[root@node1 corosync]# drbd-overview

drbd not loaded

Configure DRBD as a cluster resource:

The DRBD resource agent is currently classified by OCF under the linbit provider, at /usr/lib/ocf/resource.d/linbit/drbd. The following commands list this RA and its metadata:

[root@node1 corosync]# crm ra classes

heartbeat

lsb

ocf / heartbeat linbit pacemaker

stonith

[root@node1 corosync]# crm ra list ocf linbit

drbd

View the DRBD resource agent's details:

[root@node1 corosync]# crm ra info ocf:linbit:drbd

This resource agent manages a DRBD resource

as a master/slave resource. DRBD is a shared-nothing replicated storage

device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name

The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf

Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

start timeout=240

promote timeout=90

demote timeout=90

notify timeout=90

stop timeout=100

monitor_Slave interval=20 timeout=20 start-delay=1m

monitor_Master interval=10 timeout=20 start-delay=1m

DRBD needs to run on both nodes at the same time, but only one node can be Master (the primary/secondary model) while the other is Slave; it is therefore a special kind of cluster resource, a multi-state clone:

the member nodes are differentiated into Master and Slave, and both nodes are required to be in the Slave state when the service first starts.

[root@node1 corosync]# crm

crm(live)# configure

crm(live)configure# primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource="mysql" op monitor role="Master" interval="30s" op monitor role="Slave" interval="31s" op start timeout="240s" op stop timeout="100s"

crm(live)configure# ms MS_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="true"

crm(live)configure# show mysqldrbd

primitive mysqldrbd ocf:heartbeat:drbd \

params drbd_resource="mysql" \

op monitor interval="30s" role="Master" \

op monitor interval="31s" role="Slave" \

op start interval="0" timeout="240s" \

op stop interval="0" timeout="100s"

crm(live)configure# show MS_mysqldrbd

ms MS_mysqldrbd mysqldrbd \

meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# exit

bye

[root@node1 corosync]# crm status

============

Last updated: Sun May 13 21:41:57 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

2 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Resource Group: web

webip (ocf::heartbeat:IPaddr): Started node1.a.com

webserver (lsb:httpd): Started node1.a.com

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Slaves: [ node2.a.com node1.a.com ]

[root@node1 corosync]# drbdadm role mysql

Secondary/Secondary

Next, make the DRBD device mount automatically at /mysqldata. This auto-mount cluster resource must run on the Master node of the DRBD service,

and it can only start after DRBD has promoted that node to Primary.

[root@node1 corosync]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

[root@node2 corosync]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

[root@node1 corosync]# crm

crm(live)# configure

crm(live)configure# primitive MysqlFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysqldata" fstype="ext3" op start timeout=60s op stop timeout=60s

crm(live)configure# commit

crm(live)configure# exit

bye

Start testing

Defining the mysql resources (performed on node1)

First create an IP address resource for the MySQL cluster; this is the IP address clients use to reach the MySQL server through the cluster:

[root@node1 corosync]# crm configure primitive myip ocf:heartbeat:IPaddr params ip=192.168.50.100

Configure the mysqld service as a highly available resource:

[root@node1 corosync]# crm configure primitive mysqlserver lsb:mysqld

[root@node1 ~]# crm status

============

Last updated: Sat May 12 15:40:57 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node1.a.com ]

Slaves: [ node2.a.com ]

MysqlFS (ocf::heartbeat:Filesystem): Started node1.a.com

myip (ocf::heartbeat:IPaddr): Started node2.a.com

Configure resource constraints:

The cluster now has all the required resources, but it may still not handle them correctly. Resource constraints specify which cluster nodes resources run on, in what order resources are loaded, and which other resources a particular resource depends on. Pacemaker provides three kinds of resource constraints:

1) Resource Location: defines which nodes a resource can, cannot, or preferably should run on

2) Resource Colocation: defines which cluster resources may or may not run together on the same node

3) Resource Order: defines the order in which cluster resources are started on a node

When defining constraints, a score must also be specified. Scores are an essential part of how the cluster works: everything from migrating resources to deciding which resources to stop in a degraded cluster is done by manipulating scores. Scores are calculated per resource, and any node with a negative score for a resource cannot run that resource. After the resource scores are calculated,

the cluster picks the node with the highest score. INFINITY is currently defined as 1,000,000. Adding and subtracting infinity follows three basic rules:

1) any value + INFINITY = INFINITY

2) any value - INFINITY = -INFINITY

3) INFINITY - INFINITY = -INFINITY

When defining resource constraints, each constraint can also be given a score. Constraints with higher scores are applied before those with lower scores. By creating additional location constraints with different scores for a given resource, you can control the order of the nodes the resource will fail over to.

The following constraints are defined:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

crm(live)configure# order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

crm(live)configure# order myip_after_MysqlFS mandatory: MysqlFS myip

crm(live)configure# order mysqlserver_after_myip mandatory: myip mysqlserver

Verify that there are no errors:

crm(live)configure# verify

Commit:

crm(live)configure# commit

crm(live)configure# exit

bye

View the configuration:

[root@node1 ~]# crm configure show

node node1.a.com

node node2.a.com

primitive MysqlFS ocf:heartbeat:Filesystem \

params device="/dev/drbd0" directory="/mysqldata" fstype="ext3" \

op start interval="0" timeout="60s" \

op stop interval="0" timeout="60s"

primitive myip ocf:heartbeat:IPaddr \

params ip="192.168.50.100"

primitive mysqldrbd ocf:heartbeat:drbd \

params drbd_resource="mysql" \

op monitor interval="30s" role="Master" \

op monitor interval="31s" role="Slave" \

op start interval="0" timeout="240s" \

op stop interval="0" timeout="100s"

primitive mysqlserver lsb:mysqld

ms MS_mysqldrbd mysqldrbd \

meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

order myip_after_MysqlFS inf: MysqlFS myip

order mysqlserver_after_myip inf: myip mysqlserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100

Check the running status:

[root@node1 ~]# crm status

============

Last updated: Sat May 13 22:49:26 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node1.a.com ]

Slaves: [ node2.a.com ]

MysqlFS (ocf::heartbeat:Filesystem): Started node1.a.com

myip (ocf::heartbeat:IPaddr): Started node1.a.com

mysqlserver (lsb:mysqld): Started node1.a.com

As shown, the services are now running normally on node1.

On node1, check MySQL's running status:

[root@node1 ~]# service mysqld status

MySQL running (6578) [ OK ]

Check whether the file system was mounted automatically:

[root@node1 ~]# mount

/dev/sda3 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

none on /proc/fs/vmblock/mountPoint type vmblock (rw)

/dev/hdc on /media/RHEL_5.4 i386 DVD type iso9660 (ro,noexec,nosuid,nodev,uid=0)

Check the directory:

[root@node1 ~]# ls /mysqldata/

data f1 f2 lost+found

Check the VIP:

[root@node1 ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:0C:29:B2:82:C4

inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:feb2:82c4/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:122954 errors:0 dropped:0 overruns:0 frame:0

TX packets:826224 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:12821186 (12.2 MiB) TX bytes:1150412694 (1.0 GiB)

Interrupt:67 Base address:0x2024

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:B2:82:C4

inet addr:192.168.50.100 Bcast:192.168.50.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:67 Base address:0x2024

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:7250 errors:0 dropped:0 overruns:0 frame:0

TX packets:7250 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:2936563 (2.8 MiB) TX bytes:2936563 (2.8 MiB)

Continue testing:

On node1, put node1 into standby:

[root@node1 ~]# crm node standby

Check the cluster status:

[root@node1 ~]# crm status

============

Last updated: Sat May 12 15:56:11 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Node node1.a.com: standby

Online: [ node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node2.a.com ]

Stopped: [ mysqldrbd:0 ]

MysqlFS (ocf::heartbeat:Filesystem): Started node2.a.com

myip (ocf::heartbeat:IPaddr): Started node2.a.com

mysqlserver (lsb:mysqld): Started node2.a.com

As shown, all the resources have switched over to node2.

Check the running status on node2:

[root@node2 ~]# service mysqld status

MySQL running (7805) [ OK ]

Check the directory:

[root@node2 ~]# ls /mysqldata/

data f1 f2 lost+found

Everything is now working normally, so verify that the MySQL service can be accessed:

The service is accessed through the VIP 192.168.50.100. On node2, create an account that hosts on the 192.168 network can use (because it is stored on the DRBD device, this change is replicated to node1):

[root@node2 ~]# mysql

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 1

Server version: 5.5.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to test@'192.168.%.%' identified by '123456';

Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.06 sec)

mysql> exit

Bye

Then access it from another host:

[root@node1 ~]# mysql -u test -h 192.168.50.100

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 3

Server version: 5.5.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
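
To finish the failover test, node1 can be brought back online; with resource-stickiness set to 100, the resources are expected to stay on node2 rather than fail back automatically (a final check that is not part of the original transcript):

[root@node1 ~]# crm node online

[root@node1 ~]# crm status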
