Building an RHCS cluster for nginx high availability
Prepare two hosts with the nginx service installed.
Install ricci on server5 and server6:
/etc/init.d/ricci start
chkconfig ricci on
passwd ricci # set a password for the ricci user
Install luci on server5:
/etc/init.d/luci start # the output prompts you to open https://server5:8084 in a browser
chkconfig luci on
Write an nginx init script on server5 and server6 under /etc/init.d/:
chmod +x /etc/init.d/nginx # make the script executable (executable files show in green)
After verifying that nginx works correctly on both server5 and server6, stop the service so it does not interfere with the cluster.
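A minimal SysV-style init script for nginx can be sketched as follows. It is written to /tmp and only syntax-checked here for illustration; on the nodes it belongs at /etc/init.d/nginx, and the binary path /usr/local/nginx/sbin/nginx is an assumption about your install:

```shell
# Sketch of /etc/init.d/nginx; NGINX path is an assumption -- adjust to your install.
cat > /tmp/nginx.init <<'EOF'
#!/bin/sh
# chkconfig: - 85 15
# description: nginx init script for the RHCS Script resource
NGINX=/usr/local/nginx/sbin/nginx
case "$1" in
    start)   $NGINX ;;
    stop)    $NGINX -s stop ;;
    reload)  $NGINX -s reload ;;
    restart) $NGINX -s stop; sleep 1; $NGINX ;;
    status)  ps -C nginx >/dev/null && echo "nginx is running" ;;
    *)       echo "Usage: $0 {start|stop|reload|restart|status}"; exit 2 ;;
esac
EOF
sh -n /tmp/nginx.init && echo "syntax OK"
```

Note that rgmanager calls the script with start/stop/status and relies on its exit codes to decide whether the service is healthy, so each branch should exit non-zero on failure.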
Open the URL from the prompt in a browser (the site uses a self-signed certificate, so click "Advanced" to proceed).
Accept the certificate:
Log in as the superuser (root).
1. Create the cluster
Add the server5 and server6 nodes to the cluster. (The virtual machines reboot automatically during creation, so make sure the required services are enabled at boot beforehand, or the luci page will show errors.)
Cluster Name: any name you like, as long as you know what it stands for.
Use the Same Password for All Nodes: check this so every node uses the same ricci password; enter it once and the rest are filled in automatically.
Download Packages: check this so each node automatically installs and starts the required cluster packages when it joins.
Reboot Nodes Before Joining Cluster: nodes reboot after joining the cluster.
Enable Shared Storage Support: enables shared-storage support.
2. Add a failover domain and set priorities
Failover Domains define which nodes a service can fail over to and in what order: the lower the number, the higher the priority, and the higher-priority machine runs the service first.
3. Add resources
(an IP Address resource and a Script resource pointing to the nginx init script)
Then open the cluster's high-availability management page.
4. Service Groups
Add Resource: add the resources created above to the service group, the IP first, then the script; the order must not be changed. Refresh the browser page when done.
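What luci writes out for the steps above is the rgmanager section of /etc/cluster/cluster.conf on each node. A rough sketch is shown below; the domain name, node names, virtual IP, and priorities are placeholders following this tutorial's setup, and your generated file will differ in detail:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="nginxfail" ordered="1" restricted="1">
      <failoverdomainnode name="server5" priority="1"/>
      <failoverdomainnode name="server6" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="172.25.0.100/24" sleeptime="10"/>
    <script file="/etc/init.d/nginx" name="nginx"/>
  </resources>
  <service domain="nginxfail" name="nginx" recovery="relocate">
    <!-- the IP must come before the script, matching the order set in luci -->
    <ip ref="172.25.0.100/24"/>
    <script ref="nginx"/>
  </service>
</rm>
```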
Verification: open www.westos.org in a browser (assuming you have already set up name resolution; if not, use the IP address).
clusvcadm -r nginx -m server6
This command relocates the running service to server6.
Alternatively, stop nginx on server5 (the active node); after about 5 seconds the service automatically fails over to server6.
Notes:
1. Keep the two hosts in sync, or errors will occur; if an out-of-sync condition causes errors, restart both virtual machines and bring them back in sync.
2. Never start the nginx service by hand: it will confuse the cluster. Let the cluster manage everything.
3. If you accidentally started the service manually, run:
clusvcadm -d nginx # disable the service
clusvcadm -e nginx # re-enable the service
Installing and configuring fence
Install fence on the physical host:
yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 -y
systemctl start fence_virtd.service
fence_virtd -c # configure fence interactively
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.2
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [virbr0]: br0
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]: libvirt
Configuration complete.
=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }
}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }
}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
Generate the authentication key and scp it to every node:
dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
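The key is nothing more than 128 random bytes; fence_virtd on the host and fence_xvm on every node must hold identical copies at /etc/cluster/fence_xvm.key. A quick sanity check of the generated key (written to /tmp here for illustration):

```shell
# Generate the 128-byte shared key and verify its size.
dd if=/dev/urandom of=/tmp/fence_xvm.key bs=128 count=1 2>/dev/null
stat -c '%s' /tmp/fence_xvm.key   # prints 128
```

After generating it, copy the key into /etc/cluster/ on each cluster node (e.g. scp fence_xvm.key root@server5:/etc/cluster/), then restart fence_virtd so it picks the key up.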
Add the fence device in the browser
Add the fence method to server5 and server6 by following these steps.
To test: take down one node's network, or run echo c > /proc/sysrq-trigger on it to crash the kernel; fence will automatically power-cycle that node so it reboots.