Red Hat Enterprise Cluster and Storage Management: DRBD + Heartbeat + NFS in Detail

Background
This lab deploys a DRBD + Heartbeat + NFS environment to build a highly available (HA) file-server cluster. In this design, DRBD guarantees the integrity and consistency of the servers' data. DRBD works much like a network RAID-1: when you write data to the local file system, the data is also sent to another host on the network and written to a file system there in the same form, so the data on the primary and standby nodes stays synchronized in real time. If the local primary server fails, the backup server still holds an identical copy of the data and can continue serving it. Using DRBD in an HA setup can therefore replace a shared disk array, because the data exists on both the primary and the backup server: on failover, the remote host simply uses its own copy of the data to keep providing the same service, and clients notice nothing of the primary server's failure.
Simplified application topology (diagram not reproduced here)
Package download: http://down.51cto.com/data/402474

Implementation steps:
I. Basic network configuration
1.1 node1: basic configuration & new partition
1.1.1 Check system information and sync the clock

[root@node1 ~]# uname -rv

2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009

[root@node1 ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 5.4 (Tikanga)

[root@node1 ~]# hwclock -s

[root@node1 ~]# date

Wed Feb 8 13:55:50 CST 2012

1.1.2 Check the hostname; set and verify the IP address

[root@node1 ~]# hostname

node1.junjie.com

[root@node1 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1.junjie.com

[root@node1 ~]# setup

[root@node1 ~]# service network restart

Shutting down interface eth0: [ OK ]

Shutting down loopback interface: [ OK ]

Bringing up loopback interface: [ OK ]

Bringing up interface eth0: [ OK ]

[root@node1 ~]# ifconfig eth0

eth0 Link encap:Ethernet HWaddr 00:0C:29:AE:83:D1

inet addr:192.168.101.211 Bcast:192.168.101.255 Mask:255.255.255.0

1.1.3 Configure /etc/hosts (so DNS is not needed)

[root@node1 ~]# echo "192.168.101.211 node1.junjie.com node1" >>/etc/hosts

[root@node1 ~]# echo "192.168.101.212 node2.junjie.com node2" >>/etc/hosts

1.1.4 Create a new partition to use for DRBD

[root@node1 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1318 10482412+ 83 Linux

/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris

[root@node1 ~]# fdisk /dev/sda

(fdisk keystrokes: p, n, p, <Enter> to accept the default start cylinder, +1000M, p, w)
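The slash-separated letters are the interactive fdisk keystrokes. For reference, the same dialog can be scripted; this sketch is an assumption (not from the article) and only prints the input unless you explicitly set DISK, because feeding it to a real disk rewrites the partition table.

```shell
# Hypothetical non-interactive version of the fdisk dialog above.
# DANGEROUS on a real disk: only pipes into fdisk when DISK is set,
# e.g.  DISK=/dev/sda sh mkpart.sh
FDISK_INPUT='p
n
p

+1000M
p
w
'
if [ -n "${DISK:-}" ]; then
    printf '%s' "$FDISK_INPUT" | fdisk "$DISK" && partprobe "$DISK"
else
    printf '%s' "$FDISK_INPUT"   # dry run: just show the keystrokes
fi
```

The empty line in the input accepts the default start cylinder, matching the `//` in the keystroke list.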

[root@node1 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1318 10482412+ 83 Linux

/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris

/dev/sda4 1580 1702 987997+ 83 Linux

[root@node1 ~]# partprobe /dev/sda

[root@node1 ~]# cat /proc/partitions

major minor #blocks name

 

8 0 20971520 sda

8 1 104391 sda1

8 2 10482412 sda2

8 3 2096482 sda3

8 4 987997 sda4

1.2 node2: basic configuration & new partition

1.2.1 Check system information and sync the clock

[root@node2 ~]# uname -rv

2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009

[root@node2 ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 5.4 (Tikanga)

[root@node2 ~]# hwclock -s

[root@node2 ~]# date

Wed Feb 8 14:02:22 CST 2012

1.2.2 Check the hostname; set and verify the IP address

[root@node2 ~]# hostname

node2.junjie.com

[root@node2 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2.junjie.com

[root@node2 ~]# setup

[root@node2 ~]# service network restart

Shutting down interface eth0: [ OK ]

Shutting down loopback interface: [ OK ]

Bringing up loopback interface: [ OK ]

Bringing up interface eth0: [ OK ]

[root@node2 ~]# ifconfig eth0

eth0 Link encap:Ethernet HWaddr 00:0C:29:D1:D4:32

inet addr:192.168.101.212 Bcast:192.168.101.255 Mask:255.255.255.0

1.2.3 Configure /etc/hosts (so DNS is not needed)

[root@node2 ~]# echo "192.168.101.211 node1.junjie.com node1" >>/etc/hosts

[root@node2 ~]# echo "192.168.101.212 node2.junjie.com node2" >>/etc/hosts

1.2.4 Create a new partition to use for DRBD

[root@node2 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1318 10482412+ 83 Linux

/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris

[root@node2 ~]# fdisk /dev/sda

(fdisk keystrokes: p, n, p, <Enter> to accept the default start cylinder, +1000M, p, w)

[root@node2 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 1318 10482412+ 83 Linux

/dev/sda3 1319 1579 2096482+ 82 Linux swap / Solaris

/dev/sda4 1580 1702 987997+ 83 Linux

[root@node2 ~]# partprobe /dev/sda

[root@node2 ~]# cat /proc/partitions

major minor #blocks name

 

8 0 20971520 sda

8 1 104391 sda1

8 2 10482412 sda2

8 3 2096482 sda3

8 4 987997 sda4

1.3 Set up SSH key authentication between node1 and node2, so that either node can run commands on the other directly.

1.3.1 Generate and copy the SSH key on node1

[root@node1 ~]# ssh-keygen -t rsa

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]

1.3.2 Generate and copy the SSH key on node2

[root@node2 ~]# ssh-keygen -t rsa

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]

II. DRBD installation and configuration
Perform the following on both node1 and node2.
The packages I downloaded (placed under /root/):

drbd83-8.3.8-1.el5.centos.i386.rpm

kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
 

2.1 Install the DRBD packages

[root@node1 ~]# rpm -ivh drbd83-8.3.8-1.el5.centos.i386.rpm

[root@node1 ~]# rpm -ivh kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

 

[root@node2 ~]# rpm -ivh drbd83-8.3.8-1.el5.centos.i386.rpm

[root@node2 ~]# rpm -ivh kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

2.2 Load the DRBD kernel module

[root@node1 ~]# modprobe drbd

[root@node1 ~]# lsmod | grep drbd

drbd 228528 0

[root@node1 ~]#

 

[root@node2 ~]# modprobe drbd

[root@node2 ~]# lsmod | grep drbd

drbd 228528 0

[root@node2 ~]#

2.3 Edit the configuration files
At runtime DRBD reads the configuration file /etc/drbd.conf, which describes the mapping between DRBD devices and disk partitions.

2.3.1 On node1, prepare the configuration:

 

[root@node1 ~]# cd /etc/drbd.d/

[root@node1 drbd.d]# ll

total 4

-rwxr-xr-x 1 root root 1418 Jun 4 2010 global_common.conf

[root@node1 drbd.d]# cp global_common.conf global_common.conf.bak
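The scp in the next step also copies an nfs.res resource file whose contents the article never shows. Below is a hedged reconstruction of what it likely contained: the hostnames, IP addresses, /dev/drbd0 and the /dev/sda4 backing partition come from the article, while port 7789 and the protocol choice are conventional defaults assumed here.

```
# /etc/drbd.d/nfs.res - hypothetical reconstruction
resource nfs {
  protocol C;                       # fully synchronous replication
  on node1.junjie.com {
    device    /dev/drbd0;
    disk      /dev/sda4;            # the new partition from step 1.1.4
    address   192.168.101.211:7789;
    meta-disk internal;
  }
  on node2.junjie.com {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.101.212:7789;
    meta-disk internal;
  }
}
```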

2.3.2 Copy the configuration to node2:

[root@node1 drbd.d]# scp /etc/drbd.conf node2:/etc/

drbd.conf 100% 133 0.1KB/s 00:00

[root@node1 drbd.d]# scp /etc/drbd.d/* node2:/etc/drbd.d/

global_common.conf 100% 427 0.4KB/s 00:00

global_common.conf.bak 100% 1418 1.4KB/s 00:00

nfs.res 100% 330 0.3KB/s 00:00

[root@node1 drbd.d]#

2.4 Validate the configuration
(The first run fails because the meta-data area has not been created yet; that is done in step 2.5.)

[root@node1 drbd.d]# drbdadm adjust nfs

0: Failure: (119) No valid meta-data signature found.

 

==> Use 'drbdadm create-md res' to initialize meta-data area. <==

 

Command 'drbdsetup 0 disk /dev/sda4 /dev/sda4 internal --set-defaults --create-device --fencing=resource-only --on-io-error=detach' terminated with exit code 10

[root@node1 drbd.d]# drbdadm adjust nfs

drbdsetup 0 show:5: delay-probe-volume 0k => 0k out of range [4..1048576]k.

[root@node1 drbd.d]#

2.5 Create the nfs resource meta-data

2.5.1 Create the meta-data for the nfs resource on node1

[root@node1 drbd.d]# drbdadm create-md nfs

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node1 drbd.d]# ll /dev/drbd0

brw-r----- 1 root disk 147, 0 Feb 8 14:27 /dev/drbd0

2.5.2 Create the meta-data for the nfs resource on node2

[root@node1 drbd.d]# ssh node2.junjie.com 'drbdadm create-md nfs'

NOT initialized bitmap

Writing meta data...

initializing activity log

New drbd meta data block successfully created.

[root@node1 drbd.d]# ssh node2.junjie.com 'ls -l /dev/drbd0'

brw-r----- 1 root disk 147, 0 Feb 8 14:19 /dev/drbd0

2.6 Start the DRBD service

[root@node1 drbd.d]# service drbd start

Starting DRBD resources: drbdsetup 0 show:5: delay-probe-volume 0k => 0k out of range [4..1048576]k.

[root@node1 drbd.d]# ssh node2.junjie.com 'service drbd start'

Starting DRBD resources: drbdsetup 0 show:5: delay-probe-volume 0k => 0k out of range [4..1048576]k.

[root@node1 drbd.d]#

2.7 Check the DRBD status

[root@node1 drbd.d]# service drbd status

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16

m:res cs ro ds p mounted fstype

0:nfs Connected Secondary/Secondary Inconsistent/Inconsistent C

[root@node1 drbd.d]# ssh node2.junjie.com 'service drbd status'

drbd driver loaded OK; device status:

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by [email protected], 2010-06-04 08:04:16

m:res cs ro ds p mounted fstype

0:nfs Connected Secondary/Secondary Inconsistent/Inconsistent C

[root@node1 drbd.d]#

 

[root@node1 drbd.d]# drbd-overview

0:nfs Connected Secondary/Secondary Inconsistent/Inconsistent C r----

[root@node1 drbd.d]# ssh node2.junjie.com 'drbd-overview'

0:nfs Connected Secondary/Secondary Inconsistent/Inconsistent C r----

[root@node1 drbd.d]#

 

[root@node1 drbd.d]# chkconfig drbd on

[root@node1 drbd.d]# chkconfig --list drbd

drbd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

[root@node1 drbd.d]# ssh node2.junjie.com 'chkconfig drbd on'

[root@node1 drbd.d]# ssh node2.junjie.com 'chkconfig --list drbd'

drbd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

2.8 On the primary node node1, promote the device, create a filesystem, and mount it.

[root@node1 drbd.d]# mkdir /mnt/nfs

[root@node1 drbd.d]# ssh node2.junjie.com 'mkdir /mnt/nfs'

[root@node1 drbd.d]# drbdsetup /dev/drbd0 primary -o

[root@node1 drbd.d]# mkfs.ext3 /dev/drbd0

[root@node1 drbd.d]# mount /dev/drbd0 /mnt/nfs/

 

DRBD is now configured successfully.

III. NFS configuration

On both servers, edit the NFS configuration file and the NFS init script as follows:
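The actual file contents are missing at this point in the article. Below is a hedged sketch of the changes commonly made in this kind of DRBD-backed NFS setup: the /mnt/nfs path comes from the article, while the export options and the kill-signal change are assumptions based on common practice, not taken from the article.

```
# /etc/exports on both nodes - export the DRBD-mounted directory
/mnt/nfs  *(rw,sync,no_root_squash)

# /etc/init.d/nfs on both nodes - in the stop section, kill nfsd
# hard so a failover is not delayed by a slow graceful shutdown:
#   before:  killproc nfsd -2
#   after:   killproc nfsd -9
```

After editing, `exportfs -r` (or an NFS service restart) reloads the export table.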

 
IV. Heartbeat configuration
Perform the following on both node1 and node2:
1. Install the Heartbeat packages

[root@node1 ~]# yum localinstall -y heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck

[root@node1 ~]#

 

[root@node2 ~]# yum localinstall -y heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck

[root@node2 ~]#

 
2. Copy the sample configuration files

[root@node1 ~]# cd /usr/share/doc/heartbeat-2.1.4/

[root@node1 heartbeat-2.1.4]# cp authkeys ha.cf haresources /etc/ha.d/

[root@node1 heartbeat-2.1.4]# cd /etc/ha.d/

3. Edit the configuration files
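The edited file contents are missing from the article. Below is a hedged reconstruction of the three Heartbeat files for this two-node setup: the node names, the 192.168.101.210 virtual IP, /dev/drbd0, /mnt/nfs and ext3 all come from the article, while the timing values, broadcast interface, CRC authentication and the killnfsd helper are assumptions based on common Heartbeat v1 configurations.

```
# /etc/ha.d/ha.cf - hypothetical reconstruction
logfile /var/log/ha-log
keepalive 2                  # heartbeat interval: 2 s
deadtime 30                  # declare the peer dead after 30 s
bcast eth0                   # send heartbeats over eth0 broadcast
auto_failback on             # node1 takes resources back when it returns
node node1.junjie.com
node node2.junjie.com

# /etc/ha.d/authkeys - must be chmod 600 or Heartbeat refuses to start
auth 1
1 crc

# /etc/ha.d/haresources - preferred node, cluster VIP, DRBD role,
# filesystem mount, then a helper that restarts NFS (all on one line)
node1.junjie.com IPaddr::192.168.101.210/24/eth0 drbddisk::nfs Filesystem::/dev/drbd0::/mnt/nfs::ext3 killnfsd

# /etc/ha.d/resource.d/killnfsd - assumed one-line helper (chmod 755)
killall -9 nfsd; /etc/init.d/nfs restart
```

After editing, copy all three files to node2 (e.g. with scp) and start Heartbeat on both nodes.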

 

 

 

V. Testing
1. On the test client, mount 192.168.101.210:/mnt/nfs at local /data

[root@client ~]# mkdir /data

[root@client ~]# mount 192.168.101.210:/mnt/nfs/ /data/

[root@client ~]# cd /data/

[root@client data]# ll

total 20

-rw-r--r-- 1 root root 4 Feb 8 17:41 f1

drwx------ 2 root root 16384 Feb 8 14:57 lost+found

[root@client data]# touch f-client-1

[root@client data]# ll

total 20

-rw-r--r-- 1 root root 0 Feb 8 19:50 f-client-1

-rw-r--r-- 1 root root 4 Feb 8 17:41 f1

drwx------ 2 root root 16384 Feb 8 14:57 lost+found

[root@client data]#cd

[root@client ~]#

2. On the test client, create a test shell script that probes the mount once per second
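The script itself is missing from the article; its output in step 4 shows a timestamped "trying/done touch x" pair every second. Below is a sketch consistent with that output; the file name /data/x and the helper-function structure are assumptions.

```shell
#!/bin/bash
# nfs.sh - probe the NFS mount by touching a file once per second,
# printing a timestamped line before and after each attempt, so a
# failover stall shows up as a gap in the output.

probe_once() {               # one touch attempt against file $1
    echo "---> trying touch x : $(date)"
    touch "$1" && echo "<----- done touch x : $(date)"
    echo ""
}

probe_loop() {               # probe file $1 every second, $2 times (0 = forever)
    n=0
    while :; do
        probe_once "$1"
        n=$((n + 1))
        [ "${2:-0}" -gt 0 ] && [ "$n" -ge "$2" ] && break
        sleep 1
    done
}

# As used in the test: run until interrupted against the NFS mount.
# probe_loop /data/x 0
```

Uncomment the last line to run it against the mount from step 1; stop it with Ctrl-C.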

 

3. Stop the Heartbeat service on the primary node node1; the standby node node2 takes over

[root@node1 ha.d]# service heartbeat stop

Stopping High-Availability services:

[ OK ]

[root@node1 ha.d]# drbd-overview

0:nfs Connected Secondary/Primary UpToDate/UpToDate C r----

[root@node1 ha.d]# ifconfig eth0:0

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:AE:83:D1

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:67 Base address:0x2000

 

[root@node1 ha.d]#

 

[root@node2 ha.d]# drbd-overview

0:nfs Connected Primary/Secondary UpToDate/UpToDate C r---- /mnt/nfs ext3 950M 18M 885M 2%

[root@node2 ha.d]# ifconfig eth0:0

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:D1:D4:32

inet addr:192.168.101.210 Bcast:192.168.101.254 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:67 Base address:0x2000

 

[root@node2 ha.d]#

4. Run the nfs.sh test script on the client; it keeps printing output like this:

[root@client ~]# ./nfs.sh

---> trying touch x : Wed Feb 8 20:00:58 CST 2012

<----- done touch x : Wed Feb 8 20:00:58 CST 2012

 

---> trying touch x : Wed Feb 8 20:00:59 CST 2012

<----- done touch x : Wed Feb 8 20:00:59 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:00 CST 2012

<----- done touch x : Wed Feb 8 20:01:00 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:01 CST 2012

<----- done touch x : Wed Feb 8 20:01:01 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:02 CST 2012

<----- done touch x : Wed Feb 8 20:01:02 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:03 CST 2012

<----- done touch x : Wed Feb 8 20:01:03 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:04 CST 2012

<----- done touch x : Wed Feb 8 20:01:04 CST 2012

 

---> trying touch x : Wed Feb 8 20:01:05 CST 2012

<----- done touch x : Wed Feb 8 20:01:05 CST 2012

 
 
5. Check the mount information on the client; the share remains fully usable:

[root@client ~]# mount

/dev/sda2 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

nfsd on /proc/fs/nfsd type nfsd (rw)

192.168.101.210:/mnt/nfs on /data type nfs (rw,addr=192.168.101.210)

[root@client ~]#

[root@client ~]# ll /data/

total 20

-rw-r--r-- 1 root root 0 Feb 8 19:50 f-client-1

-rw-r--r-- 1 root root 4 Feb 8 17:41 f1

drwx------ 2 root root 16384 Feb 8 14:57 lost+found

[root@client ~]#

 
At this point node2 has taken over the service successfully, and the setup provides the intended functionality. You can also create files by hand in the NFS-mounted directory while switching the DRBD/Heartbeat roles back and forth between node1 and node2 to test further.
6. Restore node1 as the primary node

[root@node1 ha.d]# service heartbeat start

Starting High-Availability services:

2012/02/08_20:04:49 INFO: Resource is stopped

[ OK ]

[root@node1 ha.d]# drbd-overview

0:nfs Connected Primary/Secondary UpToDate/UpToDate C r---- /mnt/nfs ext3 950M 18M 885M 2%

[root@node1 ha.d]#

The DRBD + Heartbeat + NFS setup is now complete.
(End)

--xjzhujunjie

--2012/05/08