Cluster Basics ----- (Installing fence)

On the physical host

1. Install the following packages from the 7.2 image:
yum install fence-virtd-multicast.x86_64 fence-virtd.x86_64 fence-virtd-libvirt.x86_64 -y   (network listener and power manager; the virtual machines are controlled at the hypervisor level, so a guest that has been fenced off can no longer be reached from user space)
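A quick way to confirm that all three packages actually landed (plain rpm, nothing cluster-specific):

rpm -q fence-virtd fence-virtd-multicast fence-virtd-libvirt   ## each should report an installed version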

2. Configure fence:
fence_virtd -c

Module search path [/usr/lib64/fence-virt]: -- path searched for the available modules

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: -- the listener module to use

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
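It does no harm to confirm that the answers were actually written out before continuing:

cat /etc/fence_virt.conf    ## should match the === Begin/End Configuration === block above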

3. Generate the shared key:
   a) cd /etc/cluster                                                   ## the directory the key file lives in
   b) dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1   ## generate the key from the host's random-number source
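On a freshly installed host /etc/cluster may not exist yet, and it is easy to check that the key really is 128 bytes; a small sketch:

mkdir -p /etc/cluster                   ## only needed if the directory is missing
ls -l /etc/cluster/fence_xvm.key        ## the file size should be exactly 128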

4. Restart the service (make sure the firewall allows it; check with iptables -L):
systemctl restart fence_virtd.service

    netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           8163/fence_virtd    
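fence_virtd answers fencing requests on UDP port 1229, so the firewall note above matters. A minimal sketch for a host running firewalld (an assumption; if plain iptables is in use, add an equivalent rule instead):

firewall-cmd --permanent --add-port=1229/udp   ## open the fence_virtd port
firewall-cmd --reload
iptables -L -n | grep 1229                     ## confirm the rule is active
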
5. Copy the key to the two cluster nodes (server1 and server4):

    scp fence_xvm.key root@server1:/etc/cluster/

    scp fence_xvm.key root@server4:/etc/cluster/
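The key on every node must be byte-for-byte identical to the one on the host; a quick check, assuming the nodes resolve as server1 and server4 (md5sum is only used as a checksum here):

md5sum /etc/cluster/fence_xvm.key                      ## run on the physical host
ssh root@server1 md5sum /etc/cluster/fence_xvm.key     ## the sums must all match
ssh root@server4 md5sum /etc/cluster/fence_xvm.key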

The steps below are identical on node one (server1) and node four (server4).
1. Check that the key is in place and look at the cluster configuration:
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf cman-notify.d fence_xvm.key
[root@server1 cluster]# ll
total 12
-rw-r----- 1 root root  261 Jul 22 17:00 cluster.conf
drwxr-xr-x 2 root root 4096 Sep 16  2013 cman-notify.d
-rw-r--r-- 1 root root  128 Jul 24 10:51 fence_xvm.key
[root@server1 cluster]# cat cluster.conf   ## the settings below were made through the web interface and are written back to this file automatically

<?xml version="1.0"?>
<cluster config_version="7" name="hahaha ">
    <clusternodes>
        <clusternode name="server1" nodeid="1">
            <fence>
                <method name="fence1">
                    <device domain="vm1" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="server4" nodeid="2">
            <fence>
                <method name="fence4">
                    <device domain="vm4" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>
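With the key distributed and the vmfence device defined above, it is worth proving that fencing works before relying on it. A short sketch, assuming the fence-virt client tools are installed on the nodes and that the libvirt domain names really are vm1 and vm4 as written in cluster.conf:

fence_xvm -o list        ## run on either node; should list vm1 and vm4 with their UUIDs
fence_node server4       ## run on server1; the physical host should power-cycle vm4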

2. Install httpd on both nodes

1) yum install -y httpd
2) ip addr
   ps ax
3) cd /var/www/html/
   ls
4) vim index.html                      ## write the test page (see the sketch below)
   vim /etc/httpd/conf/httpd.conf      ## check that the listening port is 80
   ip addr
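A different index.html on each node makes it easy to see which machine answers once the service moves around; a minimal sketch (the page text is arbitrary, and httpd itself is normally left for the cluster manager to start and stop):

echo server1 > /var/www/html/index.html    ## on server4, write "server4" instead
curl http://localhost/                     ## once httpd is running, this should echo the node name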
