On RHEL 6 the clustering stack is called RHCS (Red Hat Cluster Suite).
It provides high availability, reliability, load balancing, and storage (a shared filesystem); its core function is high availability.
Its load-balancing clusters are implemented through LVS's IPVS.
Installing the RHEL 6 system
We need a rhel6 system here; we install it over the network.
Once logged in:
[root@localhost ~]# vi /etc/sysconfig/network # on RHEL 6 the hostname is configured this way
NETWORKING=yes
HOSTNAME=server1
[root@localhost ~]# vi /etc/hosts # local name resolution
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.1 server1
172.25.254.2 server2
172.25.254.3 server3
172.25.254.4 server4
172.25.254.5 server5
172.25.254.6 server6
[root@localhost ~]# cd /etc/udev/rules.d
[root@localhost rules.d]# ls
60-raw.rules 70-persistent-net.rules
[root@localhost rules.d]# cat 70-persistent-net.rules
# PCI device 0x1af4:0x1000 (virtio-pci)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="52:54:00:f0:35:52", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# This rule pins the NIC name to the MAC address. Since we will use this VM as a snapshot base, the file must be deleted;
# otherwise every VM created from the snapshot would inherit this MAC address and fail when bringing up the network.
[root@localhost rules.d]# rm -fr 70-persistent-net.rules
# Configure the yum repository
[root@localhost rules.d]# vi /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.254.67/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[root@localhost rules.d]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 # configure the IP
DEVICE="eth0"
BOOTPROTO="static"
IPADDR="172.25.254.1"
NETMASK="255.255.255.0"
ONBOOT="yes"
[root@localhost rules.d]# /etc/init.d/network restart # this is how the network is restarted on RHEL 6
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: Determining if ip address 172.25.254.1 is already in use for device eth0...
[ OK ]
[root@localhost rules.d]# ping 172.25.254.67
PING 172.25.254.67 (172.25.254.67) 56(84) bytes of data.
64 bytes from 172.25.254.67: icmp_seq=1 ttl=64 time=0.115 ms
[root@localhost rules.d]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel-source | 3.9 kB 00:00
rhel-source/primary_db | 3.1 MB 00:00
repo id repo name status
rhel-source Red Hat Enterprise Linux 6Server - x86_64 - Source 3,690
repolist: 3,690
# the yum repository is now usable
# Disable SELinux
[root@localhost ~]# vi /etc/sysconfig/selinux
SELINUX=disabled
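Editing the file only takes effect after a reboot; to stop SELinux enforcing in the running system right away as well:
[root@localhost ~]# setenforce 0 # permissive for the current boot; the config file covers later boots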
[root@localhost ~]# /etc/init.d/iptables stop # stop the firewall
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@localhost ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Unloading modules: [ OK ]
[root@localhost ~]# chkconfig iptables off # disable start at boot
[root@localhost ~]# chkconfig ip6tables off
# Install some software we will need:
[root@localhost ~]# yum install -y lftp openssh-clients vim
[root@localhost ~]# poweroff
# Create snapshots on the physical host
[root@rhel7host html]# cd /var/lib/libvirt/images/
[root@rhel7host images]# virt-sysprep -d base # scrub machine-specific state (logs, host keys, and so on) out of the base VM
[root@rhel7host images]# qemu-img create -f qcow2 -b base.qcow2 server1
[root@rhel7host images]# qemu-img create -f qcow2 -b base.qcow2 server2
[root@rhel7host images]# qemu-img create -f qcow2 -b base.qcow2 server3
[root@rhel7host images]# qemu-img create -f qcow2 -b base.qcow2 server4 # create 4 of them
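To double-check that each overlay really points at the base image, qemu-img info prints the backing file:
[root@rhel7host images]# qemu-img info server1 # should report backing file: base.qcow2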
Then just create the VMs with virt-manager, using the snapshots created above.
Configure the high-availability yum repositories on the two installed VMs:
[root@server2 ~]# vim /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.254.67/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.254.67/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.254.67/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.254.67/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.254.67/rhel6.5/ScalableFileSystem
gpgcheck=0
[root@server2 ~]# yum clean all
[root@server2 ~]# yum repolist # now in effect
repo id repo name status
HighAvailability HighAvailability 56
LoadBalancer LoadBalancer 4
ResilientStorage ResilientStorage 62
ScalableFileSystem ScalableFileSystem 7
rhel-source Red Hat Enterprise Linux 6Server - x86_64 - Source 3,690
repolist: 3,819
[root@server2 ~]# /etc/init.d/iptables status
iptables: Firewall is not running. # the firewall is stopped
[root@server2 ~]# getenforce
Disabled # selinux is disabled
Configuring a basic cluster
[root@server1 ~]# yum install -y ricci luci
# ricci must be installed on every HA node; luci is the management node, providing a graphical management interface
[root@server1 ~]# id ricci
uid=140(ricci) gid=140(ricci) groups=140(ricci)
[root@server1 ~]# passwd ricci # installation creates the ricci user; give it a password
[root@server2 ~]# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server1 ~]# /etc/init.d/ricci start
[root@server1 ~]# /etc/init.d/luci start # start the services
[root@server1 ~]# chkconfig ricci on
[root@server1 ~]# chkconfig luci on # start at boot
[root@server2 ~]# /etc/init.d/ricci start
[root@server2 ~]# chkconfig ricci on
[root@server1 ~]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8084 0.0.0.0:* LISTEN 1398/python
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 894/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 970/master
tcp 0 0 :::22 :::* LISTEN 894/sshd
tcp 0 0 ::1:25 :::* LISTEN 970/master
tcp 0 0 :::11111 :::* LISTEN 1317/ricci
Now browse to port 8084: https://172.25.254.1:8084
It presents a temporary (self-signed) certificate that must be accepted.
Then you land on the management page:
Creating the cluster
Log in as root:
Click Manage Clusters and create a new cluster:
We ticked the option to reboot nodes as they join the cluster, so both VMs have now dropped their connections:
Broadcast message from root@server1
(unknown) at 16:43 ...
The system is going down for reboot NOW!
Connection to 172.25.254.1 closed by remote host.
Connection to 172.25.254.1 closed.
[root@server2 yum.repos.d]#
Broadcast message from root@server2
(unknown) at 16:43 ...
The system is going down for reboot NOW!
Connection to 172.25.254.2 closed by remote host.
Connection to 172.25.254.2 closed.
The cluster has now been created.
[root@server1 ~]# clustat
Cluster Status for westos_ha @ Fri May 29 16:45:31 2020
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online, Local
server2 2 Online
[root@server1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="westos_ha">
<clusternodes>
<clusternode name="server1" nodeid="1"/>
<clusternode name="server2" nodeid="2"/>
Adding a fence device
Configure fencing on the physical host (you can refer to the RHEL 7 cluster article).
Regenerate the fence configuration:
[root@rhel7host ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [br0]:
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@rhel7host cluster]# cat /etc/fence_virt.conf # the generated configuration file
fence_virtd {
listener = "multicast";
backend = "libvirt";
module_path = "/usr/lib64/fence-virt";
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key"; # location of the key file
address = "225.0.0.12";
interface = "br0";
family = "ipv4";
port = "1229";
}
}
backends {
libvirt {
uri = "qemu:///system";
}
}
Then generate a key file:
[root@rhel7host cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000123125 s, 1.0 MB/s
[root@rhel7host cluster]# ll
total 4
-rw-r--r--. 1 root root 128 May 29 16:54 fence_xvm.key
## send the key file to server1 and server2
[root@rhel7host cluster]# scp fence_xvm.key [email protected]:/etc/cluster/
[root@rhel7host cluster]# scp fence_xvm.key [email protected]:/etc/cluster/
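It's worth confirming the key arrived intact; the checksum on all three machines must match:
[root@rhel7host cluster]# md5sum /etc/cluster/fence_xvm.key
[root@rhel7host cluster]# ssh [email protected] md5sum /etc/cluster/fence_xvm.key
[root@rhel7host cluster]# ssh [email protected] md5sum /etc/cluster/fence_xvm.key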
Then add fence methods for the two nodes in the web UI.
Click server1:
Add it under the fence devices section below.
The Domain field takes server1's UUID, which you can look up in virt-manager.
Click server2:
Now start the fence daemon on the physical host:
[root@rhel7host cluster]# systemctl start fence_virtd.service
[root@rhel7host cluster]# netstat -tnlpu |grep 1229
udp 0 0 0.0.0.0:1229 0.0.0.0:* 9322/fence_virtd
Port 1229 is open.
Let's test whether fencing works:
[root@server1 cluster]# fence_node server2 # make server2 reboot
server2 rebooted, so the fence configuration is correct.
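The fence channel can also be checked from a node without rebooting anything; fence_xvm -o list queries the host over multicast:
[root@server1 ~]# fence_xvm -o list # lists the libvirt domains and their UUIDs when the key and multicast setup are correct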
Configuring a failover domain
That is, when a node such as server2 fails, this defines how services move to the other nodes.
- prioritized – orders the nodes that services fail over to
- restricted – the service may run only on the specified nodes
- no failback – when a higher-priority node recovers, the service does not move back to it
We currently have only the two nodes server1 and server2. The smaller the number, the higher the priority (the resulting config stanza is sketched below).
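After saving, the failover domain appears in /etc/cluster/cluster.conf, roughly like this (the domain name and priorities here are illustrative):
<failoverdomains>
    <failoverdomain name="webfail" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="server1" priority="1"/>
        <failoverdomainnode name="server2" priority="2"/>
    </failoverdomain>
</failoverdomains>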
Defining resources
We can define a VIP, services, and so on.
When adding the apache service, note that it should be started via the init script.
Install apache on both nodes:
[root@server1 cluster]# yum install httpd -y
[root@server1 cluster]# echo www.server1.com > /var/www/html/index.html
[root@server2 cluster]# yum install httpd -y
[root@server2 cluster]# echo www.server2.com > /var/www/html/index.html
Adding a resource group
Run exclusive means the group runs alone: the node can run only this one service, not two or more services at the same time.
Add the two resources we just created to the group:
Click Submit.
Check:
[root@server1 cluster]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:48:92:86 brd ff:ff:ff:ff:ff:ff
inet 172.25.254.1/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/24 scope global secondary eth0 # the VIP is added
inet6 fe80::5054:ff:fe48:9286/64 scope link
valid_lft forever preferred_lft forever
[root@server1 cluster]# /etc/init.d/httpd status
httpd (pid 11448) is running... # the service is running
[root@server1 cluster]# curl 172.25.254.100
www.server1.com
# the request landed on server1
Now let's split-brain server1 once:
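A common way to trigger this kind of failure, assuming sysrq is enabled, is to crash the kernel:
[root@server1 ~]# echo c > /proc/sysrq-trigger # kernel panic; the node stops responding and gets fenced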
After about five seconds it reboots; we set that 5-second delay earlier.
[root@server2 ~]# clustat
Cluster Status for westos_ha @ Fri May 29 17:36:11 2020
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online, rgmanager
server2 2 Online, Local, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
service:apache server2 started # the service is on server2
[root@server2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:36:8e:ee brd ff:ff:ff:ff:ff:ff
inet 172.25.254.2/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/24 scope global secondary eth0 # and the VIP has floated over
A cluster with storage
We bring up one more VM to provide shared storage.
[root@server3 ~]# yum install scsi-* -y # install the target (server) side
[root@server2 ~]# yum install -y iscsi-* # install the initiator (client) side
[root@server1 ~]# yum install -y iscsi-*
We add another disk to server3 and export it as the shared disk.
Added:
[root@server3 ~]# vim /etc/tgt/targets.conf
<target iqn.2020-05.com.example:server.target1>
backing-store /dev/vda # export this disk
</target>
[root@server3 ~]# /etc/init.d/tgtd start # start the tgtd service
Starting SCSI target daemon: [ OK ]
[root@server3 ~]# tgt-admin -s # view the exported disk's details
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 8590 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/vda
Backing store flags:
Account information:
ACL information:
ALL
[root@server3 ~]# ps ax
1051 ? S 0:00 [virtio-blk]
1067 ? Ssl 0:00 tgtd # there will be two of these processes
1070 ? S 0:00 tgtd
1101 pts/0 R+ 0:00 ps ax
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.254.3
172.25.254.3:3260,1 iqn.2020-05.com.example:server.target1
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.254.3 # discover the target
172.25.254.3:3260,1 iqn.2020-05.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l # log in
Logging in to [iface: default, target: iqn.2020-05.com.example:server.target1, portal: 172.25.254.3,3260] (multiple)
Login to [iface: default, target: iqn.2020-05.com.example:server.target1, portal: 172.25.254.3,3260] successful.
[root@server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2020-05.com.example:server.target1, portal: 172.25.254.3,3260] (multiple)
Login to [iface: default, target: iqn.2020-05.com.example:server.target1, portal: 172.25.254.3,3260] successful.
[root@server1 ~]# fdisk -l
Disk /dev/sdb: 8589 MB, 8589934592 bytes
## now this disk is visible and can be used like a local disk.
We create a single partition on it, to keep things simple:
[root@server1 ~]# fdisk -cu /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
Using default value 16777215
Command (m for help): p
Device Boot Start End Blocks Id System
/dev/sdb1 2048 16777215 8387584 83 Linux
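Note that server2 may not notice the new partition automatically; re-reading the partition table there (for example with partprobe) makes /dev/sdb1 appear:
[root@server2 ~]# partprobe /dev/sdb # re-read the partition table so /dev/sdb1 shows up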
Installing the database on both nodes
server1:
[root@server1 ~]# yum install -y mysql-server
[root@server1 ~]# mkfs.ext4 /dev/sdb1 # format the shared disk
[root@server1 ~]# mount /dev/sdb1 /var/lib/mysql/
[root@server1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 8813300 1113240 7252368 14% /
tmpfs 510200 25656 484544 6% /dev/shm
/dev/sda1 495844 33442 436802 8% /boot
/dev/sdb1 8255928 149492 7687060 2% /var/lib/mysql
[root@server1 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 3 root root 4096 May 29 20:28 /var/lib/mysql/
[root@server1 ~]# chown mysql.mysql /var/lib/mysql/ # give mysql write permission on this directory
[root@server1 ~]# /etc/init.d/mysqld start # start it and check
[root@server1 ~]# cd /var/lib/mysql/
[root@server1 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 lost+found mysql mysql.sock test
[root@server1 ~]# /etc/init.d/mysqld stop # stop it
Stopping mysqld: [ OK ]
[root@server1 ~]# umount /var/lib/mysql # unmount
server2:
[root@server2 ~]# yum install -y mysql-server
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.254.3
[root@server2 ~]# iscsiadm -m node -l # log in
Now configure it in the web UI:
Disable apache, because we just set run exclusive.
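The same can be done from the command line; clusvcadm -d disables a service group:
[root@server2 ~]# clusvcadm -d apache # disable the apache service group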
[root@server2 ~]# clustat
Cluster Status for westos_ha @ Fri May 29 20:57:09 2020
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online, rgmanager
server2 2 Online, Local, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
service:apache (server2) disabled
Configure the resources (the filesystem mount, the service, and the VIP).
Set the failover domain for the database:
Add a resource group:
and put the three resources above into the group.
Check:
Since we gave server2 the higher priority, check on server2.
[root@server2 ~]# /etc/init.d/mysqld status
mysqld (pid 28838) is running... # the service is running
[root@server2 ~]# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:36:8e:ee brd ff:ff:ff:ff:ff:ff
inet 172.25.254.2/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.200/24 scope global secondary eth0 # the VIP is added
inet6 fe80::5054:ff:fe36:8eee/64 scope link
valid_lft forever preferred_lft forever
[root@server2 ~]# df
/dev/sdb1 8255928 170960 7665592 3% /var/lib/mysql # automatically mounted
We can also enable the apache group from earlier again:
[root@server2 ~]# clusvcadm -e apache # enable it (-e = enable)
Local machine trying to enable service:apache...Success
service:apache is now running on server2
[root@server2 ~]# clustat
Cluster Status for westos_ha @ Fri May 29 21:17:50 2020
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server1 1 Online, rgmanager
server2 2 Online, Local, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
service:apache server2 started
service:sql server2 started
[root@server2 ~]# curl localhost
www.server2.com
It's back.
Testing split brain
Right now all of the sql group's resources are on server2:
[root@server2 mysql]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:36:8e:ee brd ff:ff:ff:ff:ff:ff
inet 172.25.254.2/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/24 scope global secondary eth0
inet 172.25.254.200/24 scope global secondary eth0
inet6 fe80::5054:ff:fe36:8eee/64 scope link
valid_lft forever preferred_lft forever
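As with server1 earlier, we crash server2 (again assuming sysrq is enabled); it is fenced and reboots:
[root@server2 ~]# echo c > /proc/sysrq-trigger # kernel panic; server2 gets fenced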
After the reboot:
[root@server1 mysql]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:48:92:86 brd ff:ff:ff:ff:ff:ff
inet 172.25.254.1/24 brd 172.25.254.255 scope global eth0
inet 172.25.254.100/24 scope global secondary eth0
inet 172.25.254.200/24 scope global secondary eth0
inet6 fe80::5054:ff:fe48:9286/64 scope link
valid_lft forever preferred_lft forever
The resources have all moved to server1.
The iSCSI back-end storage we have been using is single-writer:
for example, data written at the device's mount point on server1 does not show up on server2:
[root@server1 mysql]# cp /etc/passwd .
[root@server1 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock passwd test
[root@server2 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 lost+found mysql mysql.sock test
We need to change the back-end storage to solve this.
Implementing sharing with the GFS global filesystem
GFS is a cluster filesystem developed by Red Hat. It lets multiple servers read and write the same disk partition at the same time, so data can be managed centrally. GFS cannot stand alone, though; it needs the underlying RHCS stack.
First we disable the two resource groups from before.
Then make sure the clvmd (clustered LVM) service is running:
[root@server1 ~]# /etc/init.d/clvmd status
clvmd (pid 1305) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
Edit the configuration file:
[root@server1 ~]# vim /etc/lvm/lvm.conf
# Type 3 uses built-in clustered locking. # use cluster locking
locking_type = 3
[root@server1 mysql]# lvmconf --enable-cluster # or enable it with this command
Here is a quick review of some LVM basics:
# create the logical volume
[root@server1 ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sdb1
Clustered volume group "clustervg" successfully created
[root@server1 ~]# lvcreate -L 4G -n demo clustervg
Logical volume "demo" created
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo # format it
[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql/ # after mounting, it's ready to use
[root@server1 ~]# chown mysql.mysql /var/lib/mysql/
[root@server1 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 3 mysql mysql 4096 Jun 2 17:12 /var/lib/mysql/
[root@server1 ~]# umount /var/lib/mysql/
And at this point the logical volume is visible on server2 as well, in sync:
[root@server1 ~]# lvextend -l +1023 /dev/clustervg/demo # extend it (1023 more 4MB extents, about 4G)
[root@server2 ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 8.54g
lv_swap VolGroup -wi-ao---- 992.00m
demo clustervg -wi-a----- 8.00g # here it is
[root@server1 ~]# e2fsck -f /dev/clustervg/demo # filesystem check (required before resizing)
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/clustervg/demo: 85/262144 files (1.2% non-contiguous), 56645/1048576 blocks
[root@server1 ~]# resize2fs /dev/clustervg/demo # grow the filesystem to 8G as well
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/clustervg/demo to 2096128 (4k) blocks.
The filesystem on /dev/clustervg/demo is now 2096128 blocks long.
That demonstration used the ext4 filesystem; now let's use the global filesystem.
[root@server1 ~]# lvremove /dev/clustervg/demo # remove the previous logical volume
Do you really want to remove active clustered logical volume demo? [y/n]: y
Logical volume "demo" successfully removed
[root@server1 ~]# lvcreate -L 4G -n demo clustervg # create a new one
Logical volume "demo" created
[root@server1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 8.54g
lv_swap VolGroup -wi-ao---- 992.00m
demo clustervg -wi-a----- 4.00g
# format it
[root@server1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t westos_ha:mygfs2 /dev/clustervg/demo
# -p selects the lock protocol; -j 2 creates two journals (one per node); -t takes cluster_name:fs_name (see clustat for the cluster name)
This will destroy any data on /dev/clustervg/demo.
It appears to contain: symbolic link to `../dm-2'
Are you sure you want to proceed? [y/n] y
Device: /dev/clustervg/demo
Blocksize: 4096
Device Size 4.00 GB (1048576 blocks)
Filesystem Size: 4.00 GB (1048575 blocks)
Journals: 2
Resource Groups: 16
Locking Protocol: "lock_dlm"
Lock Table: "westos_ha:mygfs2"
UUID: c695f347-0225-16fe-03c0-344cb79e047b
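-j 2 matches our two nodes: every node that mounts the filesystem needs its own journal. If a third node were added later, a journal can be added without reformatting; a sketch using gfs2_jadd on the mounted filesystem:
[root@server1 ~]# gfs2_jadd -j 1 /var/lib/mysql # add one more journal to the mounted gfs2 filesystem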
Then we can use it:
[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server1 ~]# chown mysql.mysql /var/lib/mysql/ # the filesystem changed, so set the ownership again
[root@server1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 8813300 1136908 7228700 14% /
tmpfs 510200 31816 478384 7% /dev/shm
/dev/sda1 495844 33442 436802 8% /boot
/dev/mapper/clustervg-demo 4193856 264776 3929080 7% /var/lib/mysql # mounted
[root@server1 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 2 mysql mysql 3864 Jun 2 18:03 /var/lib/mysql/
[root@server2 ~]# blkid
/dev/mapper/clustervg-demo: LABEL="westos_ha:mygfs2" UUID="c695f347-0225-16fe-03c0-344cb79e047b" TYPE="gfs2" # the gfs2 filesystem
Start mysql:
[root@server1 ~]# /etc/init.d/mysqld status
mysqld is stopped
[root@server2 ~]# /etc/init.d/mysqld status
mysqld is stopped ## both mysqld instances are stopped right now
[root@server2 ~]# /etc/init.d/mysqld start # start mysqld on server2 only
[root@server2 ~]# cd /var/lib/mysql/
[root@server2 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock test
Check on server1:
[root@server1 ~]# cd /var/lib/mysql/
[root@server1 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock test
## even though mysqld is not running on server1, the data has synchronized there.
[root@server2 mysql]# touch file1
[root@server1 mysql]# ls
file1 ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock test
# a file created on server2 is visible on server1 as well.
Now let's wire this into the cluster.
Stop the mysqld service on server2.
Then add a boot-time mount to fstab on server1 and server2 (_netdev defers the mount until the network is up):
[root@server2 mysql]# vim /etc/fstab
[root@server2 mysql]# cat /etc/fstab
UUID="c695f347-0225-16fe-03c0-344cb79e047b" /var/lib/mysql gfs2 _netdev 0 0
[root@server1 ~]# umount /var/lib/mysql/ # unmount
[root@server2 ~]# umount /var/lib/mysql/
[root@server2 ~]# mount -a
[root@server2 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 8813300 1075980 7289628 13% /
tmpfs 510200 31816 478384 7% /dev/shm
/dev/sda1 495844 33442 436802 8% /boot
/dev/mapper/clustervg-demo 4193856 286512 3907344 7% /var/lib/mysql
# after mount -a, it is mounted again
Now configure the resources in the web UI:
Delete the dbdata resource from the sql group, since it was defined with the ext4 filesystem.
Then add a GFS2 filesystem resource
and put it into the sql resource group.
On server1, test that mysqld starts and stops, then unmount the volume there, since enabling the service will bring everything up automatically.
Keep it mounted on server2.
[root@server1 ~]# umount /var/lib/mysql/ # unmount
[root@server1 ~]# clusvcadm -e sql # enable the resource group
Local machine trying to enable service:sql...Success
service:sql is now running on server1
[root@server1 ~]# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 172.25.254.200/24 scope global secondary eth0 # the VIP is added
inet6 fe80::5054:ff:fe48:9286/64 scope link
valid_lft forever preferred_lft forever
[root@server1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 8813300 1136968 7228640 14% /
tmpfs 510200 31816 478384 7% /dev/shm
/dev/sda1 495844 33442 436802 8% /boot
/dev/mapper/clustervg-demo 4193856 286516 3907340 7% /var/lib/mysql # mounted
And now we copy a file over:
[root@server1 mysql]# cp /etc/passwd .
[root@server1 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock passwd test
Check on server2:
[root@server2 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock passwd test
# it appears here too
[root@server2 mysql]# rm -fr passwd # delete it on server2
[root@server2 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock test
[root@server1 mysql]# ls
ibdata1 ib_logfile0 ib_logfile1 mysql mysql.sock test # gone on server1 as well
Synchronization now works; the cluster filesystem is set up successfully.
From this we can see how clustered LVM with GFS2 differs from bare iSCSI:
bare iSCSI does not synchronize in real time and allows only a single writer, while clustered LVM allows writes from multiple nodes with the data kept in sync, sparing us manual copying; the trade-off is that it requires the underlying RHCS stack.
In other words, whenever we configure the cluster these services must be running:
- clvmd provides the clustered LVM service
- modclusterd keeps the web UI and the back end in sync (/etc/cluster/cluster.conf, which records our changes)
- ricci is the main cluster agent
- rgmanager manages resource groups
- cman provides cluster membership management
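As a quick sketch, they can be enabled at boot and checked with the usual RHEL 6 service tools (service names as listed above):
[root@server1 ~]# for s in cman clvmd rgmanager modclusterd ricci; do chkconfig $s on; done
[root@server1 ~]# for s in cman clvmd rgmanager modclusterd ricci; do service $s status; done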