DRBD + Keepalived NFS High Availability in Practice


  • https://docs.linbit.com/ (the official DRBD documentation)
  • Environment: CentOS 7
  • Master: 192.168.212.10 (chy) (NFS, keepalived and DRBD all on one machine)
  • Backup: 192.168.212.11 (chy01)
  • Client: 192.168.212.12 (chy02)
  • Architecture diagram (image not reproduced here)
  • Basic preparation

    1. Make sure time is synchronized on all servers.
      yum -y install ntp
      ntpdate -u time.nist.gov (clock synchronization command)

    2. Make sure the firewall and SELinux are disabled on all servers.
    3. Make sure hostnames are configured properly, so that each server's role is obvious from its hostname.
    4. Make sure every server can resolve every other server's hostname through /etc/hosts.
      Note: these are simple commands, so they are not explained in detail here; a hedged sketch is given below.
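      A minimal sketch of that preparation on each node (hostnames and IPs taken from this article; adjust to your environment):

      # Disable firewalld and SELinux
      systemctl stop firewalld && systemctl disable firewalld
      setenforce 0
      sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

      # Set the hostname (chy on the master, chy01 on the backup, chy02 on the client)
      hostnamectl set-hostname chy

      # Make every node resolvable by name
      echo "192.168.212.10 chy"   >> /etc/hosts
      echo "192.168.212.11 chy01" >> /etc/hosts
      echo "192.168.212.12 chy02" >> /etc/hosts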
  • Setting up DRBD
    1. The principles of DRBD are not repeated here; we use protocol C (the backup node writes the data to its disk after receiving it over the network and only then returns an OK status to the master).
    2. Install DRBD (the following operations must be performed on both master and backup; here I install with yum rather than compiling from source).
      [root@chy ~]# yum -y update kernel kernel-devel
      [root@chy ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
      [root@chy ~]# yum -y install drbd84-utils kmod-drbd84

      Create the disk partition (on both master and backup). Once DRBD is installed on both nodes, prepare the disk: here /dev/sdb (20G) is turned into a single primary partition, and on top of it an LVM logical volume of 10G is created.

      [root@chy ~]# lsblk // list all block devices
      NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda           8:0    0   20G  0 disk 
      ├─sda1        8:1    0  200M  0 part /boot
      └─sda2        8:2    0 18.9G  0 part 
      ├─cl-root 253:0    0    9G  0 lvm  /
      ├─cl-swap 253:1    0  800M  0 lvm  [SWAP]
      ├─cl-home 253:2    0  500M  0 lvm  /home
      └─cl-var  253:3    0  8.6G  0 lvm  /var
      sdb           8:16   0   20G  0 disk 
      sr0          11:0    1  4.1G  0 rom  
      [root@chy ~]# fdisk /dev/sdb
      [root@chy ~]# pvcreate /dev/sdb1
      Physical volume "/dev/sdb1" successfully created.
      [root@chy ~]# vgcreate nfsdisk /dev/sdb1
      [root@chy ~]# lvcreate -L 10G -n nfsvolume nfsdisk
      Logical volume "nfsvolume" created.
      [root@chy01 ~]# lsblk
      NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda                     8:0    0   20G  0 disk 
      ├─sda1                  8:1    0  200M  0 part /boot
      └─sda2                  8:2    0 18.9G  0 part 
      ├─cl-root           253:0    0    9G  0 lvm  /
      ├─cl-swap           253:1    0  800M  0 lvm  [SWAP]
      ├─cl-home           253:2    0  500M  0 lvm  /home
      └─cl-var            253:3    0  8.6G  0 lvm  /var
      sdb                     8:16   0   20G  0 disk 
      └─sdb1                  8:17   0   20G  0 part 
      └─nfsdisk-nfsvolume 253:4    0  100M  0 lvm  
      sr0                    11:0    1  4.1G  0 rom  
      [root@chy drbd.d]# ls -l /dev/nfsdisk/nfsvolume // check that it was created; it will be used later
      lrwxrwxrwx 1 root root 7 Dec 19 00:29 /dev/nfsdisk/nfsvolume -> ../dm-4
      [root@chy ~]# lvdisplay // view the volume details

      Apply the following DRBD configuration on the master:

      
      [root@chy etc]# cat drbd.conf 
      # You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";   # comment this line out to avoid conflicts with our own configuration
include "drbd.d/*.res";
include "drbd.d/*.cfg";                 # add a line for *.cfg files
[root@chy drbd.d]# cat drbd_basic.cfg   ## main configuration file (created under /etc/drbd.d)
global {
    usage-count yes;    # whether to take part in DRBD usage statistics; the default is yes, and either value is fine
}

common {
    syncer { rate 30M; }    # maximum network rate for master/backup synchronization; the default unit is bytes, here set in megabytes
}
resource r0 {       # r0 is the resource name; it is used when initializing the disk
    protocol C;     # use protocol C
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f ";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f ";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib64/heartbeat/drbd-peer-outdater -t 5";
        pri-lost "echo pri-lost. Have a look at the log file. | mail -s 'Drbd Alert' root";
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "NFS-HA";
        # authentication method and secret used during DRBD synchronization
    }
    disk {
        on-io-error detach;
        fencing resource-only;
        # use DOPD (drbd outdate-peer daemon) so that no switchover happens while the data is out of sync
    }
    startup {
        wfc-timeout 120;
        degr-wfc-timeout 120;
    }
    device /dev/drbd0;   # /dev/drbd0 is the device name users mount; it is created by the DRBD process

    on chy {    # each host section starts with "on" followed by the hostname (must be resolvable in /etc/hosts)
        disk /dev/nfsdisk/nfsvolume;   # backing disk for /dev/drbd0 (the LVM volume created earlier; it must exist or later steps will fail)
        address 192.168.212.10:7788;   # DRBD listening address and port, used to communicate with the other host
        meta-disk internal;            # how DRBD stores its metadata
    }

    on chy01 {
        disk /dev/nfsdisk/nfsvolume;
        address 192.168.212.11:7788;
        meta-disk internal;
    }
}
[root@chy drbd.d]# drbdadm create-md r0 // create the DRBD metadata
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created. ## created successfully
success

# The commands suggested above are listed below; they are not required to be executed:
useradd haclient
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
[root@chy drbd.d]# service drbd start
[root@chy drbd.d]# ps aux |grep drbd
root 11624 0.0 0.0 0 0 ? S< 02:33 0:00 [drbd-reissue]
root 60363 0.0 0.0 0 0 ? S< 05:36 0:00 [drbd0_submit]
root 64193 0.0 0.0 112660 972 pts/0 R+ 05:51 0:00 grep --color=auto drbd

[root@chy drbd.d]# drbdadm primary all
# set the current server to the Primary role (master); if this step fails, run "drbdadm -- --overwrite-data-of-peer primary all" instead
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

0: State change failed: (-2) Need access to UpToDate data
Command 'drbdsetup-84 primary 0' terminated with exit code 17
[root@chy drbd.d]# drbdadm up r0
[root@chy drbd.d]# drbdadm primary r0 --force // force this node to become the primary

If it starts up normally, copy /etc/drbd.d/drbd_basic.cfg and /etc/drbd.conf to the backup machine; I used scp.

[root@chy01 drbd.d]# scp 192.168.212.10:/etc/drbd.conf /etc/drbd.conf
drbd.conf 100% 158 0.2KB/s 00:00
[root@chy01 drbd.d]# scp 192.168.212.10:/etc/drbd.d/drbd_basic.cfg /etc/drbd.d/drbd_basic.cfg
drbd_basic.cfg

Use drbdadm create-md r0 to create the metadata on /dev/mapper/nfsdisk-nfsvolume (run on the backup machine).

[root@chy01 drbd.d]# drbdadm create-md r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

initializing activity log
initializing bitmap (320 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success

1. Here is a brief summary of what is done on 192.168.212.11 (backup); the operations are essentially the same as on the master:
   partition the disk and create the LVM volume (/dev/nfsdisk/nfsvolume) referenced in the configuration file; both sides must be the same size.
2. Use drbdadm create-md r0 to create the metadata.
3. Start the service: service drbd start.
4. Check the status: service drbd status. (A condensed sketch of these commands is shown right below.)
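A condensed sketch of the backup-side commands, assuming the same /dev/sdb layout as on the master:

[root@chy01 ~]# fdisk /dev/sdb                       # create one primary partition, /dev/sdb1
[root@chy01 ~]# pvcreate /dev/sdb1
[root@chy01 ~]# vgcreate nfsdisk /dev/sdb1
[root@chy01 ~]# lvcreate -L 10G -n nfsvolume nfsdisk
[root@chy01 ~]# drbdadm create-md r0                 # write the DRBD metadata
[root@chy01 ~]# service drbd start
[root@chy01 ~]# service drbd status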
After completing the above operations on the backup server, check the DRBD status on it:

[root@chy01 drbd.d]# service drbd status
Redirecting to /bin/systemctl status drbd.service
● drbd.service - DRBD -- please disable. Unless you are NOT using a cluster manager.
Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2017-12-15 06:12:37 CST; 3min 46s ago
Process: 3995 ExecStart=/lib/drbd/drbd start (code=exited, status=0/SUCCESS)
Main PID: 3995 (code=exited, status=0/SUCCESS)

Dec 15 06:12:37 chy01 drbd[3995]: to be able to call drbdsetup and drbdmeta with root privileges.
Dec 15 06:12:37 chy01 drbd[3995]: You need to fix this with these commands:
Dec 15 06:12:37 chy01 drbd[3995]: chgrp haclient /lib/drbd/drbdsetup-84
Dec 15 06:12:37 chy01 drbd[3995]: chmod o-x /lib/drbd/drbdsetup-84
Dec 15 06:12:37 chy01 drbd[3995]: chmod u+s /lib/drbd/drbdsetup-84
Dec 15 06:12:37 chy01 drbd[3995]: chgrp haclient /usr/sbin/drbdmeta
Dec 15 06:12:37 chy01 drbd[3995]: chmod o-x /usr/sbin/drbdmeta
Dec 15 06:12:37 chy01 drbd[3995]: chmod u+s /usr/sbin/drbdmeta
Dec 15 06:12:37 chy01 drbd[3995]: .
Dec 15 06:12:37 chy01 systemd[1]: Started DRBD -- please disable. Unless you are NOT using a cluster manager..
[root@chy01 drbd.d]# drbdadm dstate r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

Inconsistent/UpToDate
[root@chy01 drbd.d]# drbdadm role r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

Secondary/Primary
[root@chy drbd.d]# service drbd status # check on the DRBD master
Redirecting to /bin/systemctl status drbd.service
● drbd.service - DRBD -- please disable. Unless you are NOT using a cluster manager.
Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled; vendor preset: disabled)
Active: active (exited) since Fri 2017-12-15 02:33:20 CST; 3h 40min ago
Main PID: 11620 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/drbd.service

Dec 15 02:33:20 chy systemd[1]: Starting DRBD -- please disable. Unless you are NOT using a cluster manager....
Dec 15 02:33:20 chy drbd[11620]: Starting DRBD resources: no resources defined!
Dec 15 02:33:20 chy drbd[11620]: no resources defined!
Dec 15 02:33:20 chy drbd[11620]: WARN: stdin/stdout is not a TTY; using /dev/consoleWARN: stdin/stdout is...ined!
Dec 15 02:33:20 chy drbd[11620]: .
Dec 15 02:33:20 chy systemd[1]: Started DRBD -- please disable. Unless you are NOT using a cluster manager..
Hint: Some lines were ellipsized, use -l to show in full.
[root@chy drbd.d]# drbdadm dstate r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

UpToDate/Inconsistent
[root@chy drbd.d]# drbdadm role r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

Primary/Secondary

Mount the DRBD disk (operate on the DRBD master):

[root@chy drbd.d]# mkfs.ext4 /dev/drbd0 // the device must be formatted first
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621351 blocks
131067 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@chy drbd.d]# mkdir /database
[root@chy drbd.d]# mount /dev/drbd0 /database/
[root@chy drbd.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 8.8G 7.6G 748M 92% /
devtmpfs 737M 0 737M 0% /dev
tmpfs 748M 4.0K 748M 1% /dev/shm
tmpfs 748M 8.6M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/sda1 190M 136M 41M 77% /boot
/dev/mapper/cl-var 8.4G 3.5G 4.5G 44% /var
/dev/mapper/cl-home 497M 66M 431M 14% /home
tmpfs 150M 0 150M 0% /run/user/0
/dev/drbd0 9.8G 37M 9.2G 1% /database
[root@chy ~]# cat /proc/drbd // on CentOS 7, use this to check the current DRBD state (here on the master)
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Primary/Secondary ds:Diskless/UpToDate C r-----
ns:135468 nr:1440 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
[root@chy01 ~]# cat /proc/drbd // the same check on the backup
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Secondary/Primary ds:UpToDate/Diskless C r-----
ns:1440 nr:135468 dw:10620872 dr:1440 al:125 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10485404

After mounting on the master, format and mount on the backup as well, mainly to verify that the backup can mount and use the device normally. Before formatting, the primary/secondary roles have to be switched; see the DRBD device role switching section below. (The backup can mount the disk in the same way, but only after it has been promoted to master; only the master side can mount the disk.)
DRBD device role switching

Before switching DRBD roles, first run umount on the current primary node to unmount the disk, then promote the other host's DRBD role to Primary, and finally mount the disk on that node.

Method 1:
Operate on 192.168.212.10 (currently the primary).

[root@chy ~]# umount /dev/drbd0
[root@chy ~]# drbdadm secondary r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta
[root@chy ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Secondary/Secondary ds:Diskless/UpToDate C r-----
ns:135552 nr:1917 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
Both machines are now Secondary.
Operate on 192.168.212.11:
[root@chy01 ~]# drbdadm primary r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta
[root@chy01 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Primary/Secondary ds:UpToDate/Diskless C r-----
ns:1917 nr:135552 dw:10784752 dr:2829 al:148 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10485404
The switch succeeded.
[root@chy01 ~]# mkfs.ext4 /dev/drbd0 # the disk cannot be formatted while the node has not been promoted to Primary
Method 2:
Operate on 192.168.212.11 (currently the primary).
[root@chy01 ~]# service drbd stop # stop the service first
Stopping all DRBD resources:
[root@chy01 ~]# service drbd start // then start it again

Operate on 192.168.212.10:

[root@chy ~]# drbdadm primary r0
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

[root@chy ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Primary/Secondary ds:Diskless/UpToDate C r-----
ns:135552 nr:2829 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
I generally use the first method.

Note: the above is just a small back-and-forth switching test. Next we set up keepalived.
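Keepalived is assumed to be installed from the base repositories on both nodes (the original steps do not show this); a minimal sketch:

[root@chy ~]# yum -y install keepalived
[root@chy ~]# systemctl enable keepalived   # do not start it yet; finish the config and scripts first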
Keepalived configuration on the master (I will not explain every option here; see http://blog.51cto.com/chy940405/2052014, an earlier article of mine, if you need the details).

[root@chy keepalived]# cat keepalived.conf

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nfs {
    script "/etc/keepalived/check_nfs.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface br0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chylinux
    }
    virtual_ipaddress {
        192.168.212.100    # the VIP address
    }
    track_script {
        chk_nfs
    }
    notify_master /etc/keepalived/notify_master.sh    # script run when this node becomes master
    notify_stop /etc/keepalived/notify_stop.sh
}
[root@chy keepalived]# cat check_nfs.sh
#!/bin/sh

### Check NFS availability: whether the nfs service is running
/sbin/service nfs status &>/dev/null
if [ $? -ne 0 ];then
    ### If the service status is not normal, try restarting the service first
    /sbin/service nfs restart &>/dev/null
    /sbin/service nfs status &>/dev/null
    if [ $? -ne 0 ];then
        ### If NFS is still not healthy after the restart:
        ### unmount the drbd0 device
        umount /dev/drbd0
        ### demote the DRBD primary to secondary
        drbdadm secondary r0
        ### stop keepalived so that the VIP fails over
        /sbin/service keepalived stop
    fi
fi
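The health-check script can be exercised by hand before keepalived runs it, for example:

[root@chy keepalived]# chmod +x /etc/keepalived/check_nfs.sh
[root@chy keepalived]# sh -x /etc/keepalived/check_nfs.sh; echo "exit code: $?"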
[root@chy keepalived]# cat notify_master.sh
#!/bin/bash

time=$(date "+%F %H:%M:%S")
echo -e "$time ------notify_master------\n" >> /etc/keepalived/logs/notify_master.log
/sbin/drbdadm primary r0 &>> /etc/keepalived/logs/notify_master.log
/bin/mount /dev/drbd0 /database &>> /etc/keepalived/logs/notify_master.log
/sbin/service nfs restart &>> /var/log/master.log
echo -e "\n" >> /etc/keepalived/logs/notify_master.log
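The notify_stop.sh script referenced in keepalived.conf is not listed here; judging from its log file and the sh -x trace shown in the test section below, it is roughly the following (a reconstruction, not the verbatim script):

#!/bin/bash

time=$(date "+%F %H:%M:%S")
echo -e "$time ------notify_stop------\n" >> /etc/keepalived/logs/notify_stop.log
/sbin/service nfs stop &>> /etc/keepalived/logs/notify_stop.log
/bin/umount /database &>> /etc/keepalived/logs/notify_stop.log
/sbin/drbdadm secondary all &>> /etc/keepalived/logs/notify_stop.log
echo -e "\n" >> /etc/keepalived/logs/notify_stop.log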

Operations on the backup

[root@chy01 keepalived]# cat keepalived.conf

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nfs {
    script "/etc/keepalived/check_nfs.sh"
    interval 5
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chylinux
    }
    virtual_ipaddress {
        192.168.212.100
    }
    track_script {
        chk_nfs
    }
    notify_master /etc/keepalived/notify_master.sh
    notify_backup /etc/keepalived/notify_backup.sh
}
[root@chy01 keepalived]# cat check_nfs.sh
#!/bin/sh

### Check NFS availability: whether the nfs service is running
/sbin/service nfs status &>/dev/null
if [ $? -ne 0 ];then
    ### If the service status is not normal, try restarting the service first
    /sbin/service nfs restart &>/dev/null
    /sbin/service nfs status &>/dev/null
    if [ $? -ne 0 ];then
        ### If NFS is still not healthy after the restart:
        ### unmount the drbd0 device
        umount /dev/drbd0
        ### demote the DRBD primary to secondary
        drbdadm secondary r0
        ### stop keepalived so that the VIP fails over
        /sbin/service keepalived stop
    fi
fi
[root@chy01 keepalived]# cat notify_backup.sh
#!/bin/bash

time=$(date "+%F %H:%M:%S")
echo -e "$time ------notify_backup------\n" >> /etc/keepalived/logs/notify_backup.log
/sbin/service nfs stop &>> /etc/keepalived/logs/notify_backup.log
/bin/umount /database &>> /etc/keepalived/logs/notify_backup.log
/sbin/drbdadm secondary all &>> /etc/keepalived/logs/notify_backup.log
echo -e "\n" >> /etc/keepalived/logs/notify_backup.log
[root@chy01 keepalived]# cat notify_master.sh
#!/bin/bash
time=$(date "+%F %H:%M:%S")
echo -e "$time ------notify_master------\n" >> /etc/keepalived/logs/notify_master.log
/sbin/drbdadm primary r0 &>> /etc/keepalived/logs/notify_master.log
/bin/mount /dev/drbd0 /database &>> /etc/keepalived/logs/notify_master.log
/sbin/service nfs restart &>> /var/log/master.log
echo -e "\n" >> /etc/keepalived/logs/notify_master.log
When keepalived on the master is stopped, things should follow the expected flow: stop NFS on the master, unmount the resource device, demote DRBD on the master, promote DRBD on the backup, mount the resource device on the backup, and start NFS on the backup. That is roughly what the scripts implement.

NFS setup on the master
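The NFS server packages themselves are assumed to be installed on both master and backup (the article does not show this step); a minimal sketch:

[root@chy ~]# yum -y install nfs-utils rpcbind   # on both chy and chy01
[root@chy ~]# systemctl start rpcbind
[root@chy ~]# systemctl start nfs                # "service nfs ..." in the scripts maps to this unit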

[root@chy ~]# cat /etc/exports
/database 192.168.212.12/24(rw,sync,no_root_squash)
The shared directory here is /database, because that is where the DRBD device was mounted earlier.
[root@chy ~]# exportfs -avr
exporting 192.168.212.12/24:/database

For the NFS backup, simply scp the configuration over from the master.
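A hedged example, run on chy01:

[root@chy01 ~]# scp 192.168.212.10:/etc/exports /etc/exports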

Operations on the NFS client (chy02)

[root@chy02 mnt]# showmount -e 192.168.212.100
Export list for 192.168.212.100:
/database 192.168.212.12/24
[root@chy02 mnt]# mount -t nfs 192.168.212.100:/database/ /mnt
[root@chy02 mnt]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 8.8G 5.7G 2.6G 69% /
devtmpfs 737M 0 737M 0% /dev
tmpfs 748M 0 748M 0% /dev/shm
tmpfs 748M 8.6M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/sda1 190M 107M 70M 61% /boot
/dev/mapper/cl-var 8.4G 276M 7.7G 4% /var
/dev/mapper/cl-home 497M 26M 472M 6% /home
tmpfs 150M 0 150M 0% /run/user/0
192.168.212.100:/database 93M 1.5M 85M 2% /mnt
[root@chy02 mnt]# touch 1.19
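If the client should mount the share automatically at boot, an /etc/fstab entry pointing at the VIP can be used; a hedged example:

192.168.212.100:/database  /mnt  nfs  defaults,_netdev  0  0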

Now start testing on the master and backup (everything below relates to this test).

[root@chy keepalived]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 8.8G 7.6G 744M 92% /
devtmpfs 737M 0 737M 0% /dev
tmpfs 748M 4.0K 748M 1% /dev/shm
tmpfs 748M 26M 722M 4% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/sda1 190M 136M 41M 77% /boot
/dev/mapper/cl-var 8.4G 3.1G 4.9G 39% /var
/dev/mapper/cl-home 497M 66M 431M 14% /home
tmpfs 150M 0 150M 0% /run/user/0
/dev/drbd0 93M 1.6M 85M 2% /database
[root@chy keepalived]# cd /database/
[root@chy database]# ls
~ 111 112 1.19 12.19 222 333 444 bbb

Now stop keepalived on the master and check whether the client can still operate.

[root@chy database]# systemctl stop keepalived
[root@chy database]# ps aux |grep keepalived
root 83980 0.0 0.0 112660 980 pts/1 R+ 06:53 0:00 grep --color=auto keepalived
[root@chy database]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:29 nr:56 dw:85 dr:3674 al:2 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Strange, why did that happen? Shouldn't it have switched over? No need to worry; let's look at the logs.

[root@chy database]# cat /etc/keepalived/logs/notify_stop.log
This is the log file defined earlier in the script; it contains the following error:
Redirecting to /bin/systemctl stop nfs.service
umount: /database: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
WARN:
You are using the 'drbd-peer-outdater' as fence-peer program.
If you use that mechanism the dopd heartbeat plugin program needs
to be able to call drbdsetup and drbdmeta with root privileges.

You need to fix this with these commands:
chgrp haclient /lib/drbd/drbdsetup-84
chmod o-x /lib/drbd/drbdsetup-84
chmod u+s /lib/drbd/drbdsetup-84

chgrp haclient /usr/sbin/drbdmeta
chmod o-x /usr/sbin/drbdmeta
chmod u+s /usr/sbin/drbdmeta

0: State change failed: (-12) Device is held open by someone
Command 'drbdsetup-84 secondary 0' terminated with exit code 11
[root@chy database]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 8.8G 7.6G 744M 92% /
devtmpfs 737M 0 737M 0% /dev
tmpfs 748M 4.0K 748M 1% /dev/shm
tmpfs 748M 26M 722M 4% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/sda1 190M 136M 41M 77% /boot
/dev/mapper/cl-var 8.4G 3.1G 4.9G 39% /var
/dev/mapper/cl-home 497M 66M 431M 14% /home
tmpfs 150M 0 150M 0% /run/user/0
/dev/drbd0 93M 1.6M 85M 2% /database
[root@chy database]# umount /database/ # manual unmounting does not work either
umount: /database: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
The solution is as follows:
[root@chy database]# fuser -m /dev/drbd0
/dev/drbd0: 595c
[root@chy database]# ps aux |grep 595
root 595 0.0 0.1 115720 2340 pts/1 Ss 02:31 0:00 -bash
root 90942 0.0 0.0 112660 976 pts/1 R+ 07:02 0:00 grep --color=auto 595
[root@chy database]# kill 595
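Instead of hunting down the PID by hand, fuser can also kill every process holding the mount directly (a hedged alternative; use with care, since it will kill any shell whose working directory is /database):

[root@chy ~]# fuser -km /database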
[root@chy keepalived]# sh -x notify_stop.sh
++ date '+%F %H:%M:%S'
+ time='2017-12-19 07:04:19'
+ echo -e '2017-12-19 07:04:19 ------notify_stop------\n'
+ /sbin/service nfs stop
+ /bin/umount /database
+ /sbin/drbdadm secondary all
+ echo -e '\n'
    [root@chy keepalived]# df -h    # the device is no longer mounted
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/cl-root 8.8G 7.6G 744M 92% /
    devtmpfs 737M 0 737M 0% /dev
    tmpfs 748M 4.0K 748M 1% /dev/shm
    tmpfs 748M 26M 722M 4% /run
    tmpfs 748M 0 748M 0% /sys/fs/cgroup
    /dev/sda1 190M 136M 41M 77% /boot
    /dev/mapper/cl-var 8.4G 3.1G 4.9G 39% /var
    /dev/mapper/cl-home 497M 66M 431M 14% /home
    tmpfs 150M 0 150M 0% /run/user/0
    [root@chy01 ~]# cat /proc/drbd // check on the backup: the switchover completed normally
    version: 8.4.10-1 (api:1/proto:86-101)
    GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
    0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:57 nr:113 dw:170 dr:3665 al:2 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
    [root@chy01 ~]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/cl-root 8.8G 5.8G 2.6G 69% /
    devtmpfs 737M 0 737M 0% /dev
    tmpfs 748M 0 748M 0% /dev/shm
    tmpfs 748M 17M 731M 3% /run
    tmpfs 748M 0 748M 0% /sys/fs/cgroup
    /dev/sda1 190M 135M 41M 77% /boot
    /dev/mapper/cl-var 8.4G 710M 7.3G 9% /var
    /dev/mapper/cl-home 497M 66M 432M 14% /home
    tmpfs 150M 0 150M 0% /run/user/0
    /dev/drbd0 93M 1.6M 85M 2% /database
    [root@chy02 ~]# cd /mnt/
    [root@chy02 mnt]# ls
    ~ 111 112 1.19 12.19 222 333 444 bbb
    The data is all there, so the high availability setup is complete.

    The setup above is basically done, **but one problem remains unsolved: if the master node loses power or is shut down outright, the master/backup switchover does not happen correctly. If any reader has a good solution, please get in touch; I would be glad to discuss it.**