Oracle 11g RAC Complete Installation Walkthrough (VMware Workstation 8.0 + CentOS 5.5), Part 1: Grid Installation

Created by Cryking. Please credit the source when reposting. Thanks.

Environment:
VMware Workstation 8.0 + CentOS 5.5 (32-bit) + Oracle 11.2.0.1.0
Two nodes: crydb01 (node 1) and crydb02 (node 2)

1. Hostname and Network Configuration
On node 1:
Set the hostname:

# hostname crydb01
# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=crydb01
GATEWAY=192.168.123.1
# vi /etc/hosts
127.0.0.1               localhost.localdomain localhost
#Public IP
192.168.123.109 crydb01
192.168.123.108 crydb02
#Private IP
10.0.0.100      crydb01-pri
10.0.0.101      crydb02-pri
#VIP
192.168.123.209 crydb01-vip
192.168.123.208 crydb02-vip
#scan-ip
192.168.123.200 crydb-scan
::1             localhost6.localdomain6 localhost6

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.123.109
NETMASK=255.255.255.0
GATEWAY=192.168.123.1
HWADDR=00:0c:29:6c:59:9c

Add a second network adapter to node 1's VM:
(screenshots: VMware Add Hardware wizard, adding a network adapter)

After adding the adapter, reboot the system and set eth1's IP:

# reboot
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.100
NETMASK=255.255.255.0
HWADDR=00:0c:29:6c:59:a6
# service network restart

On node 2:

# hostname crydb02
# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=crydb02
GATEWAY=192.168.123.1

# vi /etc/hosts
127.0.0.1               localhost.localdomain localhost
#Public IP
192.168.123.109 crydb01
192.168.123.108 crydb02
#Private IP
10.0.0.100      crydb01-pri
10.0.0.101      crydb02-pri
#VIP
192.168.123.209 crydb01-vip
192.168.123.208 crydb02-vip
#scan-ip
192.168.123.200 crydb-scan
::1             localhost6.localdomain6 localhost6

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.123.108
NETMASK=255.255.255.0
GATEWAY=192.168.123.1
HWADDR=00:0c:29:6c:59:9c

Repeat the adapter-adding steps above, reboot the system, and then:

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.101
NETMASK=255.255.255.0
HWADDR=00:0c:29:85:3b:3f
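
A quick sanity check of the addressing at this point can save trouble later. A small sketch; note that the VIP and SCAN addresses must NOT answer yet, since Clusterware only brings them up during the Grid installation:

# on each node: public and private addresses of both nodes should reply
ping -c 2 crydb01 && ping -c 2 crydb02
ping -c 2 crydb01-pri && ping -c 2 crydb02-pri
# these should still be unreachable before Grid is installed
ping -c 2 crydb01-vip
ping -c 2 crydb-scan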

2. Users and Groups
Run the following as root on both nodes:

# groupadd -g 500 oinstall
# groupadd -g 501 dba
# groupadd -g 1020 asmadmin
# groupadd -g 1022 asmoper
# groupadd -g 1021 asmdba
# useradd -u 1100 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid
# useradd -u 1101 -g oinstall -G dba,asmdba -d /home/oracle -s /bin/bash oracle
# passwd grid
# passwd oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown grid:oinstall /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/grid
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
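
As a quick check that both accounts got the intended group memberships, id should report something like the following (a sketch; group order may differ):

# id grid
uid=1100(grid) gid=500(oinstall) groups=500(oinstall),501(dba),1020(asmadmin),1021(asmdba),1022(asmoper)
# id oracle
uid=1101(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),1021(asmdba)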

3. SSH User Equivalence
First configure SSH equivalence for the oracle user.
On node 1:

[root@crydb01 ~]# su - oracle
[oracle@crydb01 ~]$ mkdir ~/.ssh
[oracle@crydb01 ~]$ chmod 700 ~/.ssh
[oracle@crydb01 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
d1:85:66:a2:78:65:8c:db:81:f6:90:52:21:4b:b2:6b oracle@crydb01
[oracle@crydb01 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
0f:10:50:57:c5:2f:06:b2:61:52:5a:61:5b:18:9c:1d oracle@crydb01
[oracle@crydb01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@crydb01 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@crydb01 ~]$ chmod 600 ~/.ssh/authorized_keys

Do the same on node 2:

[root@crydb02 ~]# su - oracle
[oracle@crydb02 ~]$ mkdir ~/.ssh
[oracle@crydb02 ~]$ chmod 700 ~/.ssh
[oracle@crydb02 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
/home/oracle/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
ff:6b:3e:b5:39:80:2b:37:88:d6:fe:cf:c6:b5:3b:ec oracle@crydb02
[oracle@crydb02 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
/home/oracle/.ssh/id_dsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
aa:b8:46:02:60:aa:83:72:df:43:d1:1a:b6:56:a9:14 oracle@crydb02

Back on node 1:

[oracle@crydb01 ~]$ ssh crydb02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle@crydb02's password: 
[oracle@crydb01 ~]$ ssh crydb02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@crydb02's password:
[oracle@crydb01 ~]$ scp ~/.ssh/authorized_keys crydb02:~/.ssh/authorized_keys
oracle@crydb02's password:

Verify on node 2:

[oracle@crydb02 ~]$ ssh crydb01 date
[oracle@crydb02 ~]$ ssh crydb02 date

And on node 1:

[oracle@crydb01 ~]$ ssh crydb01 date
[oracle@crydb01 ~]$ ssh crydb02 date

If none of these commands prompt for a password, the configuration succeeded.
Note: if you are sure you followed the steps above but equivalence still fails, make sure /home/oracle has identical permissions on both nodes, or simply run chmod -R 700 /home/oracle as root on all nodes.

Repeat the same configuration for the grid user.
SSH user equivalence is now configured.
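
A compact way to exercise every connection is a small loop, run as both oracle and grid on each node; if it never prompts for a password, equivalence is complete (a minimal sketch):

$ for h in crydb01 crydb02; do ssh $h date; done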

4. Creating the Shared Disk
As shown:
(screenshots: VMware Add Hardware wizard, adding a new virtual disk)
I gave it only 10 GB here and checked "Allocate all disk space now". After adding the disk, select it and open the Advanced settings on the right:
(screenshot: disk Advanced settings)

Shut down node 1:
[root@crydb01 ~]# halt

Edit crydb01's virtual machine file (*.vmx).
The entries for the newly added disk look like this:
...
scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "Red Hat Linux02_shared.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.redo = ""
Append the following at the end:

diskLib.dataCacheMaxSize=0
diskLib.dataCacheMaxReadAheadSize=0
diskLib.dataCacheMinReadAheadSize=0
diskLib.dataCachePageSize=4096
diskLib.maxUnsyncedWrites="0"

disk.locking="FALSE"
scsi1.sharedBus="virtual"
scsi1.virtualDev = "lsilogic"

Start node 1; once it comes up without problems, do the following on node 2's VM:
(screenshot: adding a hard disk on node 2)
Be sure to choose "Use an existing virtual disk" here.
(screenshots: selecting the existing shared vmdk)
Then shut down node 2's VM and apply the same Advanced disk settings as on node 1:
(screenshots: disk Advanced settings on node 2)
The virtual device node here must match node 1's.
Then edit node 2's virtual machine file (*.vmx) and add the same entries as on node 1.

One problem came up along the way: the OS kept failing to detect the newly added SCSI disk. It worked after adding this line to the vmx file:
scsi1.virtualDev = "lsilogic"

Back on node 1, log in as root:

[root@crydb01 ~]# fdisk -l

Disk /dev/hda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1044     8281507+  83  Linux
/dev/hda3            1045        3002    15727635   83  Linux
/dev/hda4            3003        3263     2096482+   5  Extended
/dev/hda5            3003        3263     2096451   82  Linux swap / Solaris

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table

The newly added disk /dev/sda is visible. Partition it as follows:

[root@crydb01 ~]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): 
Using default value 1305

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1305    10482381   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@crydb01 rpm]# partprobe
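
If node 2 does not see the new partition right away, force a partition-table re-read there first (partprobe is part of the parted package on CentOS 5):

[root@crydb02 ~]# partprobe /dev/sda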

Then check on node 2:

[root@crydb02 ~]# fdisk -l

Disk /dev/hda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1044     8281507+  83  Linux
/dev/hda3            1045        3002    15727635   83  Linux
/dev/hda4            3003        3263     2096482+   5  Extended
/dev/hda5            3003        3263     2096451   82  Linux swap / Solaris

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1305    10482381   83  Linux

The shared disk has been created successfully.

5. Cluster Prerequisite Checks
Check whether the required rpm packages are installed, for example:
[root@crydb01 dev]# rpm -qa | grep -i binutils-2.*
binutils-2.17.50.0.6-14.el5

Install any package that is missing.
Dependency packages required for 11g RAC (the list can also be found in the Oracle documentation):

    binutils-2.17.50.0.6 
    compat-libstdc++-33-3.2.3 
    compat-libstdc++-33-3.2.3 (32 bit) 
    elfutils-libelf-0.125 
    elfutils-libelf-devel-0.125 
    elfutils-libelf-devel-static-0.125 
    gcc-4.1.2 
    gcc-c++-4.1.2 
    glibc-2.5-24 
    glibc-2.5-24 (32 bit) 
    glibc-common-2.5 
    glibc-devel-2.5 
    glibc-devel-2.5 (32 bit) 
    glibc-headers-2.5 
    ksh-20060214 
    libaio-0.3.106 
    libaio-0.3.106 (32 bit) 
    libaio-devel-0.3.106 
    libaio-devel-0.3.106 (32 bit) 
    libgcc-4.1.2 
    libgcc-4.1.2 (32 bit) 
    libstdc++-4.1.2 
    libstdc++-4.1.2 (32 bit) 
    libstdc++-devel 4.1.2 
    make-3.81 
    pdksh-5.2.14 
    sysstat-7.0.2 
    unixODBC-2.2.11 
    unixODBC-2.2.11 (32 bit) 
    unixODBC-devel-2.2.11 
    unixODBC-devel-2.2.11 (32 bit) 

Run the pre-installation check on node 1:

[root@crydb01 oracle]# su - grid
[grid@crydb01 ~]$ ./grid/runcluvfy.sh stage -pre crsinst -n crydb01,crydb02 -fixup -verbose
Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "crydb01"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  crydb01                               yes                     
  crydb02                               yes                     
Result: Node reachability check passed from node "crydb01"
...

Carefully review and resolve every check that fails.
Note: if the two nodes' clocks are out of sync, you may see
PRVF-5415 : Check to see if NTP daemon is running failed
Start the ntpd service on both nodes: service ntpd start
If you then get
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option
edit the following file on both nodes (add -x):

# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""

I also got "Result: Swap space check failed" here; for this VM test setup it can safely be ignored.
Verify the OS and hardware:

$ ./runcluvfy.sh stage -post hwos -n crydb01,crydb02 -verbose
...
Post-check for hardware and operating system setup was successful

6. User Environment Configuration
Stop the firewall on both nodes:
/etc/rc.d/init.d/iptables stop
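
To keep iptables from coming back after a reboot, also disable it at boot (standard CentOS 5 command):

# chkconfig iptables off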

On each node:

# su - grid
$ vi .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
ORACLE_SID=+ASM1;export ORACLE_SID    # on node 2 this should be +ASM2
GRID_BASE=/u01/app/grid;export GRID_BASE
ORACLE_BASE=$GRID_BASE;export ORACLE_BASE

GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_HOME=$GRID_HOME;export ORACLE_HOME

NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";export NLS_DATE_FORMAT

TNS_ADMIN=$GRID_HOME/network/admin;export TNS_ADMIN

CLASSPATH=$GRID_HOME/JRE
CLASSPATH=${CLASSPATH}:$GRID_HOME/jdbc/lib/ojdbc6.jar
CLASSPATH=${CLASSPATH}:$GRID_HOME/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/network/jlib
export CLASSPATH

export TEMP=/tmp
export TMPDIR=/tmp
# User specific environment and startup programs

PATH=$GRID_HOME/bin:$PATH:$HOME/bin

export PATH

umask 022

:wq

$ su - oracle
$ vi .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
ORACLE_UNQNAME=crydb;export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_SID=crydb01;export ORACLE_SID    # on node 2, use that node's instance SID
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1;export ORACLE_HOME
ORACLE_HOME_LISTNER=$ORACLE_HOME;export ORACLE_HOME_LISTNER
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
ORACLE_TRACE=$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace;export ORACLE_TRACE
PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin

export PATH

alias sqlplus='rlwrap sqlplus'    # requires the rlwrap package; drop this alias if rlwrap is not installed

export TEMP=/tmp
export TMPDIR=/tmp
umask 022

:wq

Set resource limits by adding the following on both nodes:

[root@crydb01 /]# vi /etc/security/limits.conf
# Oracle
oracle              soft    nproc   2047
oracle              hard    nproc   16384
oracle              soft    nofile  1024
oracle              hard    nofile  65536
oracle              soft    stack   10240
# Grid
grid              soft    nproc   2047
grid              hard    nproc   16384
grid              soft    nofile  1024
grid              hard    nofile  65536
grid              soft    stack   10240

Append the following to the end of /etc/profile:

# Oracle
if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
   if [ $SHELL = "/bin/ksh" ]; then
      ulimit -p 16384
      ulimit -n 65536
   else
      ulimit -u 16384 -n 65536
   fi
   umask 022
fi
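
To confirm the limits take effect, log in again as oracle (or grid) and check; the expected values follow from the settings above:

# su - oracle
$ ulimit -n     # should now report 65536
$ ulimit -u     # should now report 16384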

7. Installing and Configuring ASMLib for ASM
Note: ASMLib is not strictly required for ASM; the devices can also be managed through standard Linux I/O calls.
Run as root on both nodes:

[root@crydb01 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm \
> oracleasmlib-2.0.4-1.el5.i386.rpm \
> oracleasm-support-2.1.7-1.el5.i386.rpm

Verify the installation:

[root@crydb01 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (i686)
oracleasmlib-2.0.4-1.el5 (i386)
oracleasm-support-2.1.7-1.el5 (i386)

Configure ASMLib:

[root@crydb01 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@crydb01 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

Create the ASM disk (on node 1 only):

[root@crydb01 ~]# /usr/sbin/oracleasm createdisk DISK1 /dev/sda1
Writing disk header: done
Instantiating disk: done

Scan for the disk on node 2:

[root@crydb02 ~]#  /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK1"

List the current ASM disks (run on both nodes):

[root@crydb01 ~]# /usr/sbin/oracleasm listdisks
DISK1
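
You can also confirm that the label maps to a valid ASM disk; oracleasm querydisk (part of oracleasm-support) prints output similar to:

[root@crydb01 ~]# /usr/sbin/oracleasm querydisk DISK1
Disk "DISK1" is a valid ASM disk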

8. Installing Grid
Now install Grid and ASM (run the installer on node 1 only).
Before starting, make sure remote X11 display works (DISPLAY set correctly). I used Xshell directly, which required no extra configuration.

[grid@crydb01 ~]$ cd /home/grid/grid
[grid@crydb01 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 3283 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-02-12_11-10-08PM. Please wait ...

(installer screenshots 1-4)
Note: the SCAN Name on this screen must match the entry in /etc/hosts.
(screenshot 5)
The next step adds the nodes and sets up SSH connectivity:
(screenshots 6-7)
Click "Setup" on the screen above.
(screenshot 8)
SSH connectivity established successfully:
(screenshots 9-11)
Since only one shared disk was created, choose External redundancy.
(screenshots 12-16)
Next come the prerequisite checks:
(screenshot 17)

My ntpd had a problem here; on restart it reported:
ntpd: Synchronizing with time server: [FAILED]
It turned out to be a DNS configuration problem (/etc/resolv.conf); once that was fixed, ntpd worked.

There was one more error, caused by the grid user's primary group being set incorrectly (dba instead of oinstall). Click "Fix & Check Again", note the fixup script path shown, and run the script as root. The group setup given earlier in this article has already been corrected, so you should not hit this error.
In my case:

[root@crydb01 ntp]# cd /tmp/CVU_11.2.0.1.0_grid/
[root@crydb01 CVU_11.2.0.1.0_grid]# ./runfixup.sh 
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
uid=1100(grid) gid=500(oinstall) groups=500(oinstall),501(dba),1020(asmadmin),1022(asmoper),1021(asmdba)

(screenshot 18)
You can save the response file here; it is useful later for silent installs.

Now the long installation begins...

(screenshots 19-20)
Run the scripts shown in the dialog as root on node 1 and node 2 in turn:

[root@crydb01 CVU_11.2.0.1.0_grid]# cd /u01/app/11.2.0/grid/
[root@crydb01 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y         
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-02-13 00:38:49: Parsing the host name
2015-02-13 00:38:49: Checking for super user privileges
2015-02-13 00:38:49: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

CRS-2672: Attempting to start 'ora.gipcd' on 'crydb01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'crydb01'
CRS-2676: Start of 'ora.gipcd' on 'crydb01' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'crydb01'
CRS-2676: Start of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'crydb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'crydb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'crydb01'
CRS-2676: Start of 'ora.diskmon' on 'crydb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'crydb01'
CRS-2676: Start of 'ora.ctssd' on 'crydb01' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'crydb01'
CRS-2676: Start of 'ora.crsd' on 'crydb01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 23c34f816b8a4ffabf6414ab4123ef3d.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   23c34f816b8a4ffabf6414ab4123ef3d (ORCL:DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'crydb01'
CRS-2677: Stop of 'ora.crsd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'crydb01'
CRS-2677: Stop of 'ora.asm' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'crydb01'
CRS-2677: Stop of 'ora.ctssd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'crydb01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'crydb01'
CRS-2677: Stop of 'ora.cssd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'crydb01'
CRS-2677: Stop of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'crydb01'
CRS-2677: Stop of 'ora.gipcd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'crydb01'
CRS-2677: Stop of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'crydb01'
CRS-2676: Start of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'crydb01'
CRS-2676: Start of 'ora.gipcd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'crydb01'
CRS-2676: Start of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'crydb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'crydb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'crydb01'
CRS-2676: Start of 'ora.diskmon' on 'crydb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'crydb01'
CRS-2676: Start of 'ora.ctssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'crydb01'
CRS-2676: Start of 'ora.asm' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'crydb01'
CRS-2676: Start of 'ora.crsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'crydb01'
CRS-2676: Start of 'ora.evmd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'crydb01'
CRS-2676: Start of 'ora.asm' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'crydb01'
CRS-2676: Start of 'ora.DATA.dg' on 'crydb01' succeeded

crydb01     2015/02/13 00:46:02     /u01/app/11.2.0/grid/cdata/crydb01/backup_20150213_004602.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

At the very end, this error appeared:
(screenshot 21)
The log shows:

...
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "crydb-scan"
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "crydb-scan" (IP address: 192.168.123.200) failed
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "crydb-scan"
INFO: Verification of SCAN VIP and Listener setup failed
...

So this is a SCAN IP name-resolution problem. The likely cause: the SCAN is resolved through /etc/hosts instead of the DNS or GNS that Oracle recommends; or DNS is configured but nslookup crydb-scan does not resolve.
Since crydb-scan pings fine from both of my nodes:

[root@crydb01 grid]# ping crydb-scan
PING crydb-scan (192.168.123.200) 56(84) bytes of data.
64 bytes from crydb-scan (192.168.123.200): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from crydb-scan (192.168.123.200): icmp_seq=2 ttl=64 time=0.018 ms
...

the error can be ignored.
With that, the 11g Grid installation is complete.

9. Post-Install Checks and Script Backup
After installation, check the CRS (Cluster Ready Services) status:

[grid@crydb01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

List the cluster nodes:

[grid@crydb01 ~]$ olsnodes -n
crydb01 1
crydb02 2

Check the ASM status:

[grid@crydb01 ~]$ srvctl status asm -a
ASM is running on crydb01,crydb02
ASM is enabled.

Check the OCR (Oracle Cluster Registry):

[grid@crydb01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          3
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       2252
     Available space (kbytes) :     259868
     ID                       :  677623650
     Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

     Cluster registry integrity check succeeded

     Logical corruption check bypassed due to non-privileged user

Check the voting disk:

[grid@crydb01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   23c34f816b8a4ffabf6414ab4123ef3d (ORCL:DISK1) [DATA]
Located 1 voting disk(s).
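
For a full picture of every cluster resource, 11.2's crsctl can print a status table; it should list ora.DATA.dg, ora.asm, the listeners, the node VIPs, and the SCAN VIP as ONLINE on their respective nodes:

[grid@crydb01 ~]$ crsctl status resource -t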

Everything looks good.
Back up the root.sh scripts:

[root@crydb01 ~]# cp /u01/app/11.2.0/grid/root.sh ~/root.sh.crydb01.150213
[root@crydb02 ~]# cp /u01/app/11.2.0/grid/root.sh ~/root.sh.crydb02.150213

Grid installation is now fully complete. The next article installs the Oracle database software on top of it.


標籤: oracle11gRAC圖解教程
 3103人閱讀 評論(0) 收藏 舉報
 分類:

Created By Cryking 轉載請註明出處,謝謝

環境: 
VMware Workstation8.0 +Centos 5.5(32位)+Oracle 11.2.0.1.0 
兩個節點crydb01(節點1)和crydb02(節點2)

一、設置主機名及 網絡環境配置 
在節點1上: 
設置主機名

# hostname crydb01
# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=crydb01
GATEWAY=192.168.123.1
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
# vi /etc/hosts
127.0.0.1               localhost.localdomain localhost
#Public IP
192.168.123.109 crydb01
192.168.123.108 crydb02
#Private IP
10.0.0.100      crydb01-pri
10.0.0.101      crydb02-pri
#VIP
192.168.123.209 crydb01-vip
192.168.123.208 crydb02-vip
#scan-ip
192.168.123.200 crydb-scan
::1             localhost6.localdomain6 localhost6

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.123.109
NETMASK=255.255.255.0
GATEWAY=192.168.123.1
HWADDR=00:0c:29:6c:59:9c
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23

節點1的虛擬機添加網卡: 
這裏寫圖片描述 
這裏寫圖片描述 
這裏寫圖片描述

添加完網卡後重啓系統並設置eth1的ip:

# reboot
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.100
NETMASK=255.255.255.0
HWADDR=00:0c:29:6c:59:a6
# service network restart
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9

在節點2上:

# hostname crydb02
# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=crydb02
GATEWAY=192.168.123.1

# vi /etc/hosts
127.0.0.1               localhost.localdomain localhost
#Public IP
192.168.123.109 crydb01
192.168.123.108 crydb02
#Private IP
10.0.0.100      crydb01-pri
10.0.0.101      crydb02-pri
#VIP
192.168.123.209 crydb01-vip
192.168.123.208 crydb02-vip
#scan-ip
192.168.123.200 scan
::1             localhost6.localdomain6 localhost6

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.123.108
NETMASK=255.255.255.0
GATEWAY=192.168.123.1
HWADDR=00:0c:29:6c:59:9c
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30

重複上面的添加網卡過程,然後重啓系統,然後:

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.101
NETMASK=255.255.255.0
HWADDR=00:0c:29:85:3b:3f
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7

二、 用戶及用戶組設置 
分別在兩個節點上以root用戶進行如下操作:

# groupadd -g 500 oinstall
# groupadd -g 501 dba
# groupadd -g 1020 asmadmin
# groupadd -g 1022 asmoper
# groupadd -g 1021 asmdba
# useradd -u 1100 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid
# useradd -u 1101 -g oinstall -G dba,asmdba -d /home/oracle -s /bin/bash oracle
# passwd grid
# passwd oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown grid:oinstall /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/grid
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16

三、配置用戶SSH等效性 
先配置Oracle用戶SSH等效性. 
在節點1上:

[oracle@crydb01 ~]$ su - oracle
[oracle@crydb01 ~]$ mkdir ./ssh
[oracle@crydb01 ~]$ chmod 700 ./ssh
[oracle@crydb01 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
d1:85:66:a2:78:65:8c:db:81:f6:90:52:21:4b:b2:6b oracle@crydb01
[oracle@crydb01 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
0f:10:50:57:c5:2f:06:b2:61:52:5a:61:5b:18:9c:1d oracle@crydb01
[oracle@crydb01 ssh]$ cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
[oracle@crydb01 ssh]$ cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
[oracle@crydb01 ~]$ chmod 600 ~/.ssh/authorized_keys
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25

在節點2上進行同樣操作:

[root@crydb02 ~]# su - oracle
[oracle@crydb02 ~]$ mkdir ./ssh
[oracle@crydb02 ~]$ chmod 700 ./ssh
[oracle@crydb02 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
/home/oracle/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
ff:6b:3e:b5:39:80:2b:37:88:d6:fe:cf:c6:b5:3b:ec oracle@crydb02
[oracle@crydb02 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
/home/oracle/.ssh/id_dsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
aa:b8:46:02:60:aa:83:72:df:43:d1:1a:b6:56:a9:14 oracle@crydb02
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25

再回到節點1上:

[oracle@crydb01 ~]$ cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
[oracle@crydb01 ~]$ cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
[oracle@crydb01 ssh]$ ssh crydb02 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
oracle@crydb02's password: 
[oracle@crydb01 ssh]$ ssh crydb02 cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
oracle@crydb02's password:
[oracle@crydb01 ssh]$ scp ~/.ssh/authorized_keys crydb02:~/.ssh/authorized_keys
oracle@crydb02's password:
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8

驗證在節點2上:

[oracle@crydb02 ~]$ ssh crydb01 date
[oracle@crydb02 ~]$ ssh crydb02 date
  • 1
  • 2
  • 1
  • 2

在節點1上:

[oracle@crydb01 ~]$ ssh crydb01 date
[oracle@crydb01 ~]$ ssh crydb02 date
  • 1
  • 2
  • 1
  • 2

如果都不用輸入密碼則配置成功. 
注意:如果確認已按上面的執行,但配置總是不成功的時候,請確保兩個節點的/home/oracle的權限一致,或直接使用root在所有節點執行chmod -R 700 /home/oracle.

在grid用戶下也重複進行上面的配置. 
這樣SSH用戶等效性就配置完成了.

四、 創建共享磁盤 
如圖: 
這裏寫圖片描述 
這裏寫圖片描述 
這裏我只給了10G磁盤,勾選上立即分配所有空間,分配後點擊增加的磁盤,在右邊選擇高級設置(Advanced),如下: 
這裏寫圖片描述

關閉節點1的系統 
[root@crydb01 ~]# halt

編輯crydb01的虛擬機文件(*.vmx): 
新增的磁盤內容如下: 
… 
scsi1.present = “TRUE” 
scsi1:0.present = “TRUE” 
scsi1:0.fileName = “Red Hat Linux02_shared.vmdk” 
scsi1:0.mode = “independent-persistent” 
scsi1:0.redo = “” 
在最後加上:

diskLib.dataCacheMaxSize=0
diskLib.dataCacheMaxReadAheadSize=0
diskLib.dataCacheMinReadAheadSize=0
diskLib.dataCachePageSize=4096
diskLib.maxUnsyncedWrites="0"

disk.locking="FALSE"
scsi1.sharedBus="virtual"
scsi1.virtualDev = "lsilogic"
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9

啓動節點1,發現沒問題後再在節點2的虛擬機上進行如下操作: 
這裏寫圖片描述 
這裏注意選使用現有的虛擬磁盤 
這裏寫圖片描述 
這裏寫圖片描述 
然後關閉節點2的虛擬機後,進行和節點1一樣的磁盤高級設置: 
這裏寫圖片描述 
這裏寫圖片描述 
這裏虛擬驅動代碼要和節點1的保持一致. 
然後編輯節點2的虛擬機文件(*.vmx),添加和節點1一樣的內容.

期間發生一個問題,就是新增的SCSI盤系統總是無法認到,後來在vmx文件中添加這一行後就正常了: 
scsi1.virtualDev = “lsilogic”

回到節點1,以root登錄:

[root@crydb01 ~]# fdisk -l

Disk /dev/hda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1044     8281507+  83  Linux
/dev/hda3            1045        3002    15727635   83  Linux
/dev/hda4            3003        3263     2096482+   5  Extended
/dev/hda5            3003        3263     2096451   82  Linux swap / Solaris

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18

可以看到新加的磁盤/dev/sda,對它進行分區,如下:

[root@crydb01 ~]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): 
Using default value 1305

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1305    10482381   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@crydb01 rpm]# partprobe
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31
  • 32
  • 33
  • 34
  • 35
  • 36
  • 37
  • 38
  • 39
  • 40
  • 41
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31
  • 32
  • 33
  • 34
  • 35
  • 36
  • 37
  • 38
  • 39
  • 40
  • 41

再去到節點2上查看:

[root@crydb02 ~]# fdisk -l

Disk /dev/hda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1044     8281507+  83  Linux
/dev/hda3            1045        3002    15727635   83  Linux
/dev/hda4            3003        3263     2096482+   5  Extended
/dev/hda5            3003        3263     2096451   82  Linux swap / Solaris

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1305    10482381   83  Linux
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19

說明已經成功建立共享磁盤.

五、 集羣配置檢查 
如下檢查響應的rpm包是否安裝: 
[root@crydb01 dev]# rpm -qa | grep -i binutils-2.* 
binutils-2.17.50.0.6-14.el5 

如果沒有就需要安裝. 
安裝11g RAC需要的依賴包列表(也可在Concept上去找):

    binutils-2.17.50.0.6 
    compat-libstdc++-33-3.2.3 
    compat-libstdc++-33-3.2.3 (32 bit) 
    elfutils-libelf-0.125 
    elfutils-libelf-devel-0.125 
    elfutils-libelf-devel-static-0.125 
    gcc-4.1.2 
    gcc-c++-4.1.2 
    glibc-2.5-24 
    glibc-2.5-24 (32 bit) 
    glibc-common-2.5 
    glibc-devel-2.5 
    glibc-devel-2.5 (32 bit) 
    glibc-headers-2.5 
    ksh-20060214 
    libaio-0.3.106 
    libaio-0.3.106 (32 bit) 
    libaio-devel-0.3.106 
    libaio-devel-0.3.106 (32 bit) 
    libgcc-4.1.2 
    libgcc-4.1.2 (32 bit) 
    libstdc++-4.1.2 
    libstdc++-4.1.2 (32 bit) 
    libstdc++-devel 4.1.2 
    make-3.81 
    pdksh-5.2.14 
    sysstat-7.0.2 
    unixODBC-2.2.11 
    unixODBC-2.2.11 (32 bit) 
    unixODBC-devel-2.2.11 
    unixODBC-devel-2.2.11 (32 bit) 
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31

在節點1上執行檢查:

[root@crydb01 oracle]# su - grid
[grid@crydb01 grid]$ ./grid/runcluvfy.sh stage -pre crsinst -n crydb01,crydb02 -fixup -verbose
Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "crydb01"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  crydb01                               yes                     
  crydb02                               yes                     
Result: Node reachability check passed from node "crydb01"
...
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13

仔細檢查各個未通過的項目並解決. 
注:如兩個節點時間不同步,則可能出現 
PRVF-5415 : Check to see if NTP daemon is running failed 
在兩個節點上啓動ntpd服務: service ntpd start 
如果還出現: 
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option 
則分別在兩個節點上編輯文件如下(添加-x):

# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9

這裏我有個Result: Swap space check failed,不用理它. 
驗證OS和硬件:

$ ./runcluvfy.sh stage -post hwos -n crydb01,crydb02 -verbose
...
Post-check for hardware and operating system setup was successful
  • 1
  • 2
  • 3
  • 1
  • 2
  • 3

六、用戶環境配置 
在兩個節點上關閉防火牆: 
/etc/rc.d/init.d/iptables stop

分別在兩個節點上:

# su - grid
$ vi .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
ORACLE_SID=+ASM1;export ORACLE_SID
GRID_BASE=/u01/app/grid;export GRID_BASE
ORACLE_BASE=$GRID_BASE;export ORACLE_BASE

GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
ORACLE_HOME=$GRID_HOME;export ORACLE_HOME

NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";export NLS_DATE_FORMAT

TNS_ADMIN=$GRID_HOME/network/admin;export TNS_ADMIN

CLASSPATH=$GRID_HOME/JRE
CLASSPATH=${CLASSPATH}:$GRID_HOME/jdbc/lib/ojdbc6.jar
CLASSPATH=${CLASSPATH}:$GRID_HOME/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/network/jlib
export CLASSPATH

export TEMP=/tmp
export TMPDIR=/tmp
# User specific environment and startup programs

PATH=$GRID_HOME/bin:$PATH:$HOME/bin

export PATH

umask 022

:wq

$ su - oracle
$ vi .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
ORACLE_UNQNAME=crydb;export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
ORACLE_SID=crydb01;export ORACLE_SID
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1;export ORACLE_HOME
ORACLE_HOME_LISTNER=$ORACLE_HOME;export ORACLE_LISTNER
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin;export TNS_ADMIN
ORACLE_TRACE=$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace;export ORACLE_TRACE
PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin

export PATH

alias sqlplus='rlwrap sqlplus'

export TEMP=/tmp
export TMPDIR=/tmp
umask 022

:wq
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31
  • 32
  • 33
  • 34
  • 35
  • 36
  • 37
  • 38
  • 39
  • 40
  • 41
  • 42
  • 43
  • 44
  • 45
  • 46
  • 47
  • 48
  • 49
  • 50
  • 51
  • 52
  • 53
  • 54
  • 55
  • 56
  • 57
  • 58
  • 59
  • 60
  • 61
  • 62
  • 63
  • 64
  • 65
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31
  • 32
  • 33
  • 34
  • 35
  • 36
  • 37
  • 38
  • 39
  • 40
  • 41
  • 42
  • 43
  • 44
  • 45
  • 46
  • 47
  • 48
  • 49
  • 50
  • 51
  • 52
  • 53
  • 54
  • 55
  • 56
  • 57
  • 58
  • 59
  • 60
  • 61
  • 62
  • 63
  • 64
  • 65

設置資源限制,分別在兩個節點下添加:

[root@crydb01 /]# vi /etc/security/limits.conf
# Oracle
oracle              soft    nproc   2047
oracle              hard    nproc   16384
oracle              soft    nofile  1024
oracle              hard    nofile  65536
oracle              soft    stack   10240
# Grid
grid              soft    nproc   2047
grid              hard    nproc   16384
grid              soft    nofile  1024
grid              hard    nofile  65536
grid              soft    stack   10240
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13

在/etc/profile文件最後加上:

# Oracle
if [ $USER = "oracle" ]|| [ $USER = "grid" ]; then
   if [ $SHELL = "/bin/ksh" ]; then
      ulimit -p 16384
      ulimit -n 65536
   else
      ulimit -u 16384 -n 65536
   fi
   umask 022
fi
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10

七、爲ASM安裝、配置ASMLib 
提示:配置ASM不一定需要ASMLib,也可使用標準Linux I/O調用來管理裸設備. 
用root在兩個節點上執行:

[root@crydb01 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm \
> oracleasmlib-2.0.4-1.el5.i386.rpm \
> oracleasm-support-2.1.7-1.el5.i386.rpm
  • 1
  • 2
  • 3
  • 1
  • 2
  • 3

驗證是否成功安裝:

[root@crydb01 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (i686)
oracleasmlib-2.0.4-1.el5 (i386)
oracleasm-support-2.1.7-1.el5 (i386)
  • 1
  • 2
  • 3
  • 4
  • 1
  • 2
  • 3
  • 4

配置ASMLib:

[root@crydb01 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@crydb01 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18

創建ASM磁盤(只在節點1執行):

[root@crydb01 ~]# /usr/sbin/oracleasm createdisk DISK1 /dev/sda1
Writing disk header: done
Instantiating disk: done
  • 1
  • 2
  • 3
  • 1
  • 2
  • 3

在節點2上執行掃描磁盤:

[root@crydb02 ~]#  /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK1"
  • 1
  • 2
  • 3
  • 4
  • 5
  • 1
  • 2
  • 3
  • 4
  • 5

查看下當前的ASM磁盤(分別在兩個節點上執行):

[root@crydb01 ~]# /usr/sbin/oracleasm listdisks
DISK1
  • 1
  • 2
  • 1
  • 2

八、正式安裝Grid 
下面開始安裝grid和ASM(只在節點1上執行): 
在此之前需要保證X11遠程顯示正常(DISPLAY設置正確),我這裏是直接使用的Xshell,不用配置

[grid@crydb01 ~]$ cd /home/grid/grid
[grid@crydb01 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 3283 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-02-12_11-10-08PM. Please wait ...
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8

1 
2 
3 
4 
注意上圖中的SCAN Name要和hosts中的一致 
5 
下面一步是添加節點並配置SSH的連通性: 
6 
7 
上圖中點擊”Setup”. 
8 
SSH連通性建立成功: 
9 
10 
11 
因爲只創建了一塊共享磁盤,所以選外部(External) 
12 
13 
14 
15 
16 
下面是先決條件檢查: 
17

這裏我的ntpd有問題,重啓的時候報了: 
ntpd: Synchronizing with time server: [FAILED] 
後來發現是DNS配置(/etc/resolv.conf)有問題,改好後就正常了.

還有個錯誤(是dba組不是grid的主組導致),點擊”Fix & Check Again”,找到修復腳本路徑,以root登錄執行下就可以了–文中上面的腳本已修改,應不會出現此錯誤了. 
如我這裏是:

[root@crydb01 ntp]# cd /tmp/CVU_11.2.0.1.0_grid/
[root@crydb01 CVU_11.2.0.1.0_grid]# ./runfixup.sh 
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
uid=1100(grid) gid=500(oinstall) groups=500(oinstall),501(dba),1020(asmadmin),1022(asmoper),1021(asmdba)

(Screenshot 18 omitted.)
Here you can save a response file, which is handy later for a silent install.
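For reference, a saved response file can later be replayed without the GUI; a sketch, where /home/grid/grid.rsp stands for whatever path you saved it to:

[grid@crydb01 grid]$ ./runInstaller -silent -responseFile /home/grid/grid.rsp

(In silent mode, root.sh and the configuration assistants still have to be run separately.)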

Now the long installation begins…

(Screenshots 19-20 omitted.)
Run the scripts shown in the screenshot as root on node 1 and node 2 in turn:

[root@crydb01 CVU_11.2.0.1.0_grid]# cd /u01/app/11.2.0/grid/
[root@crydb01 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y         
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-02-13 00:38:49: Parsing the host name
2015-02-13 00:38:49: Checking for super user privileges
2015-02-13 00:38:49: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

CRS-2672: Attempting to start 'ora.gipcd' on 'crydb01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'crydb01'
CRS-2676: Start of 'ora.gipcd' on 'crydb01' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'crydb01'
CRS-2676: Start of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'crydb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'crydb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'crydb01'
CRS-2676: Start of 'ora.diskmon' on 'crydb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'crydb01'
CRS-2676: Start of 'ora.ctssd' on 'crydb01' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'crydb01'
CRS-2676: Start of 'ora.crsd' on 'crydb01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 23c34f816b8a4ffabf6414ab4123ef3d.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   23c34f816b8a4ffabf6414ab4123ef3d (ORCL:DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'crydb01'
CRS-2677: Stop of 'ora.crsd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'crydb01'
CRS-2677: Stop of 'ora.asm' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'crydb01'
CRS-2677: Stop of 'ora.ctssd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'crydb01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'crydb01'
CRS-2677: Stop of 'ora.cssd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'crydb01'
CRS-2677: Stop of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'crydb01'
CRS-2677: Stop of 'ora.gipcd' on 'crydb01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'crydb01'
CRS-2677: Stop of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'crydb01'
CRS-2676: Start of 'ora.mdnsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'crydb01'
CRS-2676: Start of 'ora.gipcd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'crydb01'
CRS-2676: Start of 'ora.gpnpd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'crydb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'crydb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'crydb01'
CRS-2676: Start of 'ora.diskmon' on 'crydb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'crydb01'
CRS-2676: Start of 'ora.ctssd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'crydb01'
CRS-2676: Start of 'ora.asm' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'crydb01'
CRS-2676: Start of 'ora.crsd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'crydb01'
CRS-2676: Start of 'ora.evmd' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'crydb01'
CRS-2676: Start of 'ora.asm' on 'crydb01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'crydb01'
CRS-2676: Start of 'ora.DATA.dg' on 'crydb01' succeeded

crydb01     2015/02/13 00:46:02     /u01/app/11.2.0/grid/cdata/crydb01/backup_20150213_004602.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

At the very end this error appeared:
(Screenshot 21 omitted.)
Checking the log:

...
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "crydb-scan"
INFO: ERROR: 
INFO: PRVF-4657 : Name resolution setup check for "crydb-scan" (IP address: 192.168.123.200) failed
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "crydb-scan"
INFO: Verification of SCAN VIP and Listener setup failed
...

So this is a SCAN IP resolution problem. The likely cause: the SCAN name is resolved through /etc/hosts rather than through DNS or GNS as Oracle recommends; or DNS is configured but nslookup crydb-scan does not resolve.
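PRVF-4664/PRVF-4657 come from cluvfy resolving the SCAN through DNS, which is also what nslookup does (it ignores /etc/hosts). A quick way to reproduce what the installer saw, assuming no DNS record exists for crydb-scan:

[root@crydb01 grid]# nslookup crydb-scan
# With hosts-only resolution this fails (NXDOMAIN), which is exactly what
# triggers the PRVF errors; with a real DNS record it would return
# 192.168.123.200.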
However, since ping crydb-scan succeeds from both nodes:

[root@crydb01 grid]# ping crydb-scan
PING crydb-scan (192.168.123.200) 56(84) bytes of data.
64 bytes from crydb-scan (192.168.123.200): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from crydb-scan (192.168.123.200): icmp_seq=2 ttl=64 time=0.018 ms
...

this error can safely be ignored.
With that, the 11g Grid installation itself is complete:

9. Checks and Script Backup After Installing Grid
After the installation completes, check the CRS (Cluster Ready Services) status:

[grid@crydb01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
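For a fuller picture than this four-line summary, crsctl can list every clusterware resource with its target and current state (the exact resource list varies with the configuration):

[grid@crydb01 ~]$ crsctl status resource -t
# Shows ASM, the listeners, node VIPs, the SCAN VIP and listener, and the
# DATA disk group, grouped into local and cluster resources per node.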

View the cluster nodes:

[grid@crydb01 ~]$ olsnodes -n
crydb01 1
crydb02 2

Check the ASM status:

[grid@crydb01 ~]$ srvctl status asm -a
ASM is running on crydb01,crydb02
ASM is enabled.

Check the OCR (Oracle Cluster Registry):

[grid@crydb01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          3
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       2252
     Available space (kbytes) :     259868
     ID                       :  677623650
     Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

     Cluster registry integrity check succeeded

     Logical corruption check bypassed due to non-privileged user
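Clusterware also keeps automatic OCR backups; as root you can list them, or take a manual backup before making further changes (the backup location below is typical, not guaranteed):

[root@crydb01 ~]# ocrconfig -showbackup
# Lists the automatic 4-hourly, daily and weekly backups kept under
# $GRID_HOME/cdata/<cluster name>/ on one of the nodes.
[root@crydb01 ~]# ocrconfig -manualbackup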

Check the voting disk:

[grid@crydb01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   23c34f816b8a4ffabf6414ab4123ef3d (ORCL:DISK1) [DATA]
Located 1 voting disk(s).

Everything looks normal.
Back up the root.sh scripts:

[root@crydb01 ~]# cp /u01/app/11.2.0/grid/root.sh ~/root.sh.crydb01.150213
[root@crydb02 ~]# cp /u01/app/11.2.0/grid/root.sh ~/root.sh.crydb02.150213

With that, the Grid installation is fully complete. The next article builds on it to install the Oracle database.
