SUN Zone Cluster Installation and Configuration Guide, Part 3

3. Creating the ZFS storage pools and ZFS file systems

Create the zpools on aptest:

bash-3.00# zpool create erpapppool c1t1d0

bash-3.00# zpool create erpdbpool c1t2d0

On aptest, export erpdbpool:

bash-3.00# zpool export erpdbpool

On dbtest, import erpdbpool:

bash-3.00# zpool import erpdbpool
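This export/import hand-off is the core of moving a pool between the two global zones, and the same sequence recurs later for manual failover. A minimal dry-run sketch (pool and host names are the ones used in this guide; the function only prints the commands so they can be reviewed before running on a live cluster):

```shell
#!/bin/sh
# Dry-run sketch of handing a ZFS pool from one global zone to another.
# It only prints the commands; on a real cluster you would run each one
# on the named host, and both hosts must see the same underlying LUNs.
pool_handoff() {
    pool=$1; src=$2; dst=$3
    echo "on ${src}: zpool export ${pool}"
    echo "on ${dst}: zpool import ${pool}"
}

pool_handoff erpdbpool aptest dbtest
```

A pool must be exported cleanly (or force-imported with -f, as shown later) before the other node imports it; importing a pool that is still active on the other host risks corruption.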

Check the zpool list on dbtest:

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpdbpool 79.5G 78K 79.5G 0% ONLINE -

rpool 79.5G 6.17G 73.3G 7% ONLINE -

Check the zpool list on aptest:

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 79.5G 76.5K 79.5G 0% ONLINE -

rpool 80.5G 6.47G 74.0G 8% ONLINE -

Create the directories:

bash-3.00# mkdir zonedir

bash-3.00# cd zonedir

bash-3.00# pwd

/zonedir

bash-3.00# mkdir erpapp

Create the mount points:

bash-3.00# zfs create -o mountpoint=/zonedir/erpapp erpapppool/erpapp_fs

bash-3.00# zfs create -o mountpoint=/zonedir/erpdb erpdbpool/erpdb_fs

Check zfs list again:

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 112K 78.3G 21K /erpapppool

erpapppool/erpapp_fs 21K 78.3G 21K /zonedir/erpapp

rpool 7.00G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 0 - 33.5K -

rpool/ROOT 5.46G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.46G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 70.6M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 62K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 0 - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.8G 6.38M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 0 - 6.38M -

bash-3.00# zpool list

NAME SIZE ALLOC FREE CAP HEALTH ALTROOT

erpapppool 79.5G 117K 79.5G 0% ONLINE -

rpool 80.5G 6.47G 74.0G 8% ONLINE -

Now create the zone:

bash-3.00# zonecfg -z erpapp

erpapp: No such zone configured

Use 'create' to begin configuring a new zone.

zonecfg:erpapp> create

zonecfg:erpapp> set zonepath=/zonedir/erpapp

zonecfg:erpapp> set autoboot=false

zonecfg:erpapp> remove inherit-pkg-dir dir=/lib

zonecfg:erpapp> remove inherit-pkg-dir dir=/platform

zonecfg:erpapp> remove inherit-pkg-dir dir=/sbin

zonecfg:erpapp> remove inherit-pkg-dir dir=/usr

zonecfg:erpapp> add net

zonecfg:erpapp:net> set address=192.168.0.42

zonecfg:erpapp:net> set physical=e1000g0

zonecfg:erpapp:net> set defrouter=192.168.0.1

zonecfg:erpapp:net> end

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

zonecfg:erpapp> verify

zonecfg:erpapp> commit

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

Add a memory cap:

zonecfg:erpapp> add capped-memory

zonecfg:erpapp:capped-memory> set physical=1.5G

zonecfg:erpapp:capped-memory> set swap=1.5G

zonecfg:erpapp:capped-memory> end

zonecfg:erpapp> info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

capped-memory:

physical: 1.5G

[swap: 1.5G]

rctl:

name: zone.max-swap

value: (priv=privileged,limit=1610612736,action=deny)

zonecfg:erpapp> commit

zonecfg:erpapp> exit

bash-3.00# zonecfg -z erpapp info

zonename: erpapp

zonepath: /zonedir/erpapp

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

hostid:

net:

address: 192.168.0.42

physical: e1000g0

defrouter: 192.168.0.1

Zone erpapp has been configured successfully, as shown below:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp configured /zonedir/erpapp native shared

Install the zone as follows:

bash-3.00# zoneadm -z erpapp install

/zonedir/erpapp must not be group readable.

/zonedir/erpapp must not be group executable.

/zonedir/erpapp must not be world readable.

/zonedir/erpapp must not be world executable.

could not verify zonepath /zonedir/erpapp because of the above errors.

zoneadm: zone erpapp failed to verify

This error is a permissions problem; fix it as follows:

bash-3.00# chmod 700 erpapp

bash-3.00# ls -lrt

total 6

drwx------ 5 root root 5 Apr 12 10:16 erpdb

drwx------ 2 root root 2 Apr 12 17:25 erpapp
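The fix works because zoneadm refuses to install into a zonepath that is readable or executable by group or world. A small portable sketch of the check (the /tmp path is a stand-in for /zonedir/erpapp, used here only for illustration):

```shell
#!/bin/sh
# Create a zonepath-style directory and give it the 700 mode that
# zoneadm requires (no group/world read or execute).
zonepath=/tmp/zonedir/erpapp     # stand-in for /zonedir/erpapp
mkdir -p "$zonepath"
chmod 700 "$zonepath"

# Verify: the mode field of ls -ld must read drwx------.
mode=$(ls -ld "$zonepath" | cut -c1-10)
[ "$mode" = "drwx------" ] && echo "zonepath permissions OK"
```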

bash-3.00# zoneadm -z erpapp install

Preparing to install zone <erpapp>.

Creating list of files to copy from the global zone.

Copying <169112> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1388> packages on the zone.

Initialized <1287> packages on zone.

Zone <erpapp> is initialized.

Installation of these packages generated errors: <SUNWpostgr-82-libs SUNWpostgr-83-server-data-root SUNWpostgr-82-server-data-root SUNWpostgr-82-client SUNWpostgr-82-server SUNWpostgr-82-contrib SUNWpostgr-82-devel>

Installation of <1> packages was skipped.

The file </zonedir/erpapp/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zoneadm -z erpapp boot

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 erpapp running /zonedir/erpapp native shared

bash-3.00# zlogin -C erpapp

[Connected to zone 'erpapp' console]

168/168

Reading ZFS config: done.

Select a Language

0. English

1. Simplified Chinese

Please make a choice (0 - 1), or press h or ? for help: 0

Select a Locale

0. English (C - 7-bit ASCII)

1. U.S.A. (UTF-8)

2. Go Back to Previous Screen

Please make a choice (0 - 2), or press h or ? for help: 0

What type of terminal are you using?

1) ANSI Standard CRT

2) DEC VT52

3) DEC VT100

4) Heathkit 19

5) Lear Siegler ADM31

6) PC Console

7) Sun Command Tool

8) Sun Workstation

9) Televideo 910

10) Televideo 925

11) Wyse Model 50

12) X Terminal Emulator (xterms)

13) CDE Terminal Emulator (dtterm)

14) Other

Type the number of your choice and press Return: 3

Creating new rsa public/private host key pair

Creating new dsa public/private host key pair

Configuring network interface addresses: e1000g0.

[ Host Name for e1000g0:1 ]

Enter the host name which identifies this system on the network. The name
must be unique within your domain; creating a duplicate host name will cause
problems on the network after you install Solaris.

A host name must have at least one character; it can contain letters,
digits, and minus signs (-).

[ Confirm Information for e1000g0:1 ]

> Confirm the following information. If it is correct, press F2;
to change any information, press F4.

Just a moment...

[ Configure Security Policy ]

Specify Yes if the system will use the Kerberos security mechanism.
Specify No if this system will use standard UNIX security.

Configure Kerberos Security
[ ] Yes

[ Confirm Information ]

> Confirm the following information. If it is correct, press F2;
to change any information, press F4.

Please wait...

[ Name Service ]

On this screen you must provide name service information. Select the name
service that will be used by this system, or None if your system will either
not use a name service at all, or if it will use a name service not listed
here.

> To make a selection, use the arrow keys to highlight the option
and press Return to mark it [X].

Name service
[X] NIS+
[ ] NIS
[ ] DNS
[ ] LDAP
[ ] None

[ Confirm Information ]

> Confirm the following information. If it is correct, press F2;
to change any information, press F4.

Just a moment...

[ NFSv4 Domain Name ]

NFS version 4 uses a domain name that is automatically derived from the
system's naming services. The derived domain name is sufficient for most
configurations. In a few cases, mounts that cross domain boundaries might
cause files to appear to be owned by "nobody" due to the lack of a common
domain name.

The current NFSv4 default domain is: ""

NFSv4 Domain Configuration
[X] Use the NFSv4 domain derived by the system

[ Confirm Information for NFSv4 Domain ]

> Confirm the following information. If it is correct, press F2;
to change any information, press F4.

[ Time Zone ]

On this screen you must specify your default time zone. You can specify a
time zone in three ways: select one of the continents or oceans from the
list, select other - offset from GMT, or other - specify time zone file.

> To make a selection, use the arrow keys to highlight the option and
press Return to mark it [X].

Continents and Oceans
[ ] Africa
[ ] Americas
[ ] Antarctica
[ ] Arctic Ocean
[ ] Asia
[ ] Atlantic Ocean
[ ] Australia
[ ] Europe
[ ] Indian Ocean

[ Country or Region ]

> To make a selection, use the arrow keys to highlight the option and
press Return to mark it [X].

Countries and Regions
[ ] Afghanistan
[ ] Armenia
[ ] Azerbaijan
[ ] Bahrain
[ ] Bangladesh
[ ] Bhutan
[ ] Brunei
[ ] Cambodia
[ ] China
[ ] Cyprus
[ ] East Timor
[ ] Georgia
[ ] Hong Kong

[ Confirm Information ]

> Confirm the following information. If it is correct, press F2;
to change any information, press F4.

Please wait...

[ Root Password ]

Please enter the root password for this system.

The root password may contain alphanumeric and special characters. For
security, the password will not be displayed on the screen as you type it.

> If you do not want a root password, leave both entries blank.

Root password:

rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_142910-17 64-bit

Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.

Hostname: erpapptest

Reading ZFS config: done.

erpapptest console login: root

Password: Apr 9 11:32:57 erpapptest sendmail[12381]: My unqualified host name (erpapptest) unknown; sleeping for retry

Apr 9 11:32:57 erpapptest sendmail[12390]: My unqualified host name (erpapptest) unknown; sleeping for retry

Apr 9 11:33:02 erpapptest login: ROOT LOGIN /dev/console

Oracle Corporation SunOS 5.10 Generic Patch January 2005

#

# bash

bash-3.00# export TERM=vt100

bash-3.00# vi /etc/default/login

#ident "@(#)login.dfl 1.14 04/06/25 SMI"
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Set the TZ environment variable of the shell.
#
#TIMEZONE=EST5EDT
#
# ULIMIT sets the file size limit for the login. Units are disk blocks.
# The default of zero means no limit.
#
#ULIMIT=0
#
# If CONSOLE is set, root can only login on that device.
# Comment this line out to allow remote login by root.
#
# CONSOLE=/dev/console ---- comment out this line so that root can log in remotely
#
# PASSREQ determines if login requires a password.
#

"/etc/default/login" 77 lines, 2260 characters

Zone-related configuration

Shut down zone erpapp on aptest.

The configuration files for zone erpapp on aptest are under /etc/zones; two files are relevant:

index and erpapp.xml

Copy the erpapp entry from the index file into the same file on dbtest.

Copy erpapp.xml into the same directory on dbtest:

bash-3.00# cd /etc/zones

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# more index

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

#

# ident "@(#)zones-index 1.2 04/04/01 SMI"

#

# DO NOT EDIT: this file is automatically generated by zoneadm(1M)

# and zonecfg(1M). Any manual changes will be lost.

#

global:installed:/

erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

Transfer the file over with ftp:

bash-3.00# ftp 192.168.0.20

Connected to 192.168.0.20.

220 dbtest FTP server ready.

Name (192.168.0.20:root): root

331 Password required for root.

Password:

230 User root logged in.

Remote system type is UNIX.

Using binary mode to transfer files.

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 621

drwxr-xr-x 3 root sys 3 Apr 6 13:36 export

lrwxrwxrwx 1 root root 9 Apr 6 13:36 bin -> ./usr/bin

drwxr-xr-x 2 root sys 2 Apr 6 13:36 mnt

drwxr-xr-x 4 root root 4 Apr 6 13:36 system

drwxr-xr-x 2 root sys 54 Apr 6 13:45 sbin

drwxr-xr-x 18 root sys 19 Apr 6 13:48 kernel

drwxr-xr-x 5 root sys 5 Apr 6 13:48 platform

drwxr-xr-x 8 root bin 243 Apr 6 13:49 lib

drwxr-xr-x 4 root root 4 Apr 6 13:51 rpool

drwxr-xr-x 8 root sys 11 Apr 6 13:52 boot

drwxr-xr-x 6 root root 11 Apr 6 14:06 install

drwxr-xr-x 42 root sys 56 Apr 6 14:22 usr

drwxr-xr-x 3 root sys 3 Apr 6 14:23 global

drwxr-xr-x 45 root sys 45 Apr 6 14:24 var

drwxr-xr-x 42 root sys 42 Apr 6 14:26 opt

drwxr-xr-x 5 root sys 12 Apr 9 09:19 devices

dr-xr-xr-x 1 root root 1 Apr 9 09:19 net

dr-xr-xr-x 1 root root 1 Apr 9 09:19 home

dr-xr-xr-x 6 root root 512 Apr 9 09:20 vol

drwxr-xr-x 3 root nobody 4 Apr 9 09:20 cdrom

drwxr-xr-x 89 root sys 247 Apr 9 09:20 etc

drwxr-xr-x 23 root sys 447 Apr 9 09:20 dev

drwxrwxrwt 7 root sys 666 Apr 9 09:20 tmp

drwxr-xr-x 2 root root 2 Apr 9 09:38 erpdbpool

drwx------ 2 root root 2 Apr 9 09:52 zonedir

dr-xr-xr-x 77 root root 260032 Apr 9 11:59 proc

226 Transfer complete.

remote: -lrt

1601 bytes received in 0.096 seconds (16.35 Kbytes/s)

ftp> cd /etc/zones

250 CWD command successful.

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 14

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root sys 356 Apr 9 11:59 index

-rw-r--r-- 1 root root 356 Apr 9 11:59 index-0409

226 Transfer complete.

remote: -lrt

414 bytes received in 0.00028 seconds (1456.40 Kbytes/s)

ftp> !ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

ftp> put erpapp.xml

200 PORT command successful.

150 Opening BINARY mode data connection for erpapp.xml.

226 Transfer complete.

local: erpapp.xml remote: erpapp.xml

363 bytes sent in 0.02 seconds (17.93 Kbytes/s)

ftp> ls -lrt

200 PORT command successful.

150 Opening ASCII mode data connection for /bin/ls.

total 15

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root sys 356 Apr 9 11:59 index

-rw-r--r-- 1 root root 356 Apr 9 11:59 index-0409

-rw-r--r-- 1 root root 363 Apr 9 12:07 erpapp.xml

226 Transfer complete.

remote: -lrt

480 bytes received in 0.00063 seconds (745.40 Kbytes/s)

ftp> bye

221-You have transferred 363 bytes in 1 files.

221-Total traffic for this session was 3845 bytes in 4 transfers.

221-Thank you for using the FTP service on dbtest.

221 Goodbye.

bash-3.00# ls -lrt

total 18

-r--r--r-- 1 root bin 402 Jun 21 2007 SUNWlx.xml

-r--r--r-- 1 root bin 562 Aug 9 2007 SUNWdefault.xml

-r--r--r-- 1 root bin 392 Aug 9 2007 SUNWblank.xml

-r--r--r-- 1 root bin 777 Mar 12 2008 SUNWtsoldef.xml

-rw-r--r-- 1 root root 363 Apr 9 10:27 erpapp.xml

-rw-r--r-- 1 root root 0 Apr 9 10:28 create

-rw-r--r-- 1 root root 0 Apr 9 10:28 remove

-rw-r--r-- 1 root root 0 Apr 9 10:28 add

-rw-r--r-- 1 root root 0 Apr 9 10:28 set

-rw-r--r-- 1 root sys 355 Apr 9 11:15 index

This step is performed on dbtest: insert the erpapp line (the last line shown below):

bash-3.00# vi index

"index" 9 lines, 285 characters

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.

# Use is subject to license terms.

#

# ident "@(#)zones-index 1.2 04/04/01 SMI"

#

# DO NOT EDIT: this file is automatically generated by zoneadm(1M)

# and zonecfg(1M). Any manual changes will be lost.

#

global:installed:/

erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835
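Each non-comment line of /etc/zones/index is a colon-separated record of the form zonename:state:zonepath[:uuid], which is why the erpapp entry can simply be appended on the other node. A small awk sketch that splits the entry from this guide into its fields:

```shell
#!/bin/sh
# Split a zones-index record into its fields.
# Record format: zonename:state:zonepath[:uuid]
line='erpapp:installed:/zonedir/erpapp:49cdd4a7-2f69-4838-c089-d25a168e1835'

# prints: zone=erpapp state=installed path=/zonedir/erpapp uuid=49cdd4a7-2f69-4838-c089-d25a168e1835
echo "$line" | awk -F: '{
    printf "zone=%s state=%s path=%s uuid=%s\n", $1, $2, $3, $4
}'
```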

On aptest:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 3.66G 74.6G 21K /erpapppool

erpapppool/erpapp_fs 3.66G 74.6G 3.66G /zonedir/erpapp

rpool 7.04G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 20K - 33.5K -

rpool/ROOT 5.50G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.50G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 76.3M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 80K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 18K - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.5G 210M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 2.18M - 6.38M -

Exporting erpapppool

First, inside the erpapp zone (host erpapptest), shut down the OS by running init 5.

Then run on aptest:

bash-3.00# zoneadm -z erpapp halt

bash-3.00# zpool export erpapppool

Then on dbtest:

bash-3.00# zpool import erpapppool

bash-3.00# zoneadm -z erpapp boot

Operations on aptest:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

3 erpapp running /zonedir/erpapp native shared

bash-3.00# zpool export erpapppool

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

rpool 7.04G 72.2G 33.5K /rpool

rpool@20120402 20K - 33.5K -

rpool@20120406 20K - 33.5K -

rpool/ROOT 5.50G 72.2G 21K legacy

rpool/ROOT@20120402 0 - 21K -

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.50G 72.2G 5.31G /

rpool/ROOT/s10x_u9wos_14a@20120402 77.2M - 4.63G -

rpool/ROOT/s10x_u9wos_14a@20120406 76.3M - 5.18G -

rpool/dump 1.00G 72.2G 1.00G -

rpool/dump@20120402 16K - 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/export 80K 72.2G 23K /export

rpool/export@20120402 18K - 23K -

rpool/export@20120406 18K - 23K -

rpool/export/home 21K 72.2G 21K /export/home

rpool/export/home@20120402 0 - 21K -

rpool/export/home@20120406 0 - 21K -

rpool/swap 553M 72.5G 210M -

rpool/swap@20120402 2.18M - 6.38M -

rpool/swap@20120406 2.18M - 6.38M -

Operations on dbtest:

bash-3.00# zpool import -f erpapppool

bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT

erpapppool 3.66G 74.6G 21K /erpapppool

erpapppool/erpapp_fs 3.66G 74.6G 3.66G /zonedir/erpapp

erpdbpool 73.5K 78.3G 21K /erpdbpool

rpool 6.70G 71.6G 32.5K /rpool

rpool@20120406 19K - 32.5K -

rpool@201204061730 0 - 32.5K -

rpool/ROOT 5.14G 71.6G 21K legacy

rpool/ROOT@20120406 0 - 21K -

rpool/ROOT@201204061730 0 - 21K -

rpool/ROOT/s10x_u9wos_14a 5.14G 71.6G 5.05G /

rpool/ROOT/s10x_u9wos_14a@20120406 88.0M - 4.54G -

rpool/ROOT/s10x_u9wos_14a@201204061730 11.5M - 4.92G -

rpool/dump 1.00G 71.6G 1.00G -

rpool/dump@20120406 16K - 1.00G -

rpool/dump@201204061730 16K - 1.00G -

rpool/export 62K 71.6G 23K /export

rpool/export@20120406 18K - 23K -

rpool/export@201204061730 0 - 23K -

rpool/export/home 21K 71.6G 21K /export/home

rpool/export/home@20120406 0 - 21K -

rpool/export/home@201204061730 0 - 21K -

rpool/swap 569M 72.1G 14.1M -

rpool/swap@20120406 10.2M - 14.1M -

rpool/swap@201204061730 0 - 14.1M -

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

- erpapp installed /zonedir/erpapp native shared

bash-3.00# zoneadm -z erpapp boot

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 erpapp running /zonedir/erpapp native shared

At this point the zone is fully built, and it can be moved between aptest and dbtest with export and import.

Next, configure the zone cluster.

Create the erpapprg resource group:

bash-3.00# clrg create -n aptest,dbtest erpapprg

Register the storage resource type and create the storage resource:

bash-3.00# clrt register SUNW.HAStoragePlus

bash-3.00# clrs create -g erpapprg -t SUNW.HAStoragePlus -x zpools=erpapppool erpappstg

Bring the resource group online:

bash-3.00# clrg online -emM erpapprg

The following warning simply means monitoring is already enabled, i.e. the group is already online:

(C348385) WARNING: Cannot enable monitoring on resource erpappstg because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor erpappstg' and re-enable monitoring using 'clresource monitor erpappstg'.

Check the status:

bash-3.00# scstat

------------------------------------------------------------------

-- Cluster Nodes --

Node name Status

--------- ------

Cluster node: aptest Online

Cluster node: dbtest Online

------------------------------------------------------------------

-- Cluster Transport Paths --

Endpoint Endpoint Status

-------- -------- ------

Transport path: aptest:e1000g3 dbtest:e1000g3 Path online

Transport path: aptest:e1000g2 dbtest:e1000g2 Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

Quorum votes possible: 3

Quorum votes needed: 2

Quorum votes present: 3

-- Quorum Votes by Node (current status) --

Node Name Present Possible Status

--------- ------- -------- ------

Node votes: aptest 1 1 Online

Node votes: dbtest 1 1 Online

-- Quorum Votes by Device (current status) --

Device Name Present Possible Status

----------- ------- -------- ------

Device votes: /dev/did/rdsk/d2s2 1 1 Online

------------------------------------------------------------------

-- Device Group Servers --

Device Group Primary Secondary

------------ ------- ---------

-- Device Group Status --

Device Group Status

------------ ------

-- Multi-owner Device Groups --

Device Group Online Status

------------ -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

Group Name Resources

---------- ---------

Resources: erpapprg erpappstg

-- Resource Groups --

Group Name Node Name State Suspended

---------- --------- ----- ---------

Group: erpapprg aptest Online No

Group: erpapprg dbtest Offline No

-- Resources --

Resource Name Node Name State Status Message

------------- --------- ----- --------------

Resource: erpappstg aptest Online Online

Resource: erpappstg dbtest Offline Offline

------------------------------------------------------------------

-- IPMP Groups --

Node Name Group Status Adapter Status

--------- ----- ------ ------- ------

IPMP Group: aptest ipmp1 Online e1000g1 Online

IPMP Group: aptest ipmp1 Online e1000g0 Online

IPMP Group: dbtest ipmp1 Online e1000g1 Online

IPMP Group: dbtest ipmp1 Online e1000g0 Online

-- IPMP Groups in Zones --

Zone Name Group Status Adapter Status

--------- ----- ------ ------- ------

Create the cluster parameter-file (pfiles) directory:

bash-3.00# mkdir /zonedir/erpapp/cluster-pfiles

bash-3.00# cd /zonedir/erpapp

bash-3.00# ls -lrt

total 11

drwxr-xr-x 19 root root 23 Apr 11 15:39 root

drwxr-xr-x 12 root sys 51 Apr 11 15:39 dev

drwxr-xr-x 2 root root 2 Apr 11 16:00 cluster-pfiles

Go to the following directory and edit the zone boot resource configuration file, sczbt_config:

bash-3.00# cd /opt/SUNWsczone/sczbt/util

bash-3.00# ls -lrt

total 28

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root bin 5520 Jul 27 2010 sczbt_config

Back up the file first:

bash-3.00# cp sczbt_config sczbt-config-0411

bash-3.00# ls -lrt

total 29

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root bin 5520 Jul 27 2010 sczbt_config

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

An example of the edited file follows; the values shown below are the ones that were edited:

bash-3.00# vi sczbt_config

RS=erpappzone

RG=erpapprg

PARAMETERDIR=/zonedir/erpapp/cluster-pfiles

SC_NETWORK=false

SC_LH=

FAILOVER=true

HAS_RS=erpappstg

Zonename="erpapp"

Zonebrand="native"

Zonebootopt=""

Milestone="multi-user-server"

LXrunlevel="3"

SLrunlevel="3"

Mounts=""

"sczbt_config" 159 lines, 5593 characters
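Since sczbt_register simply sources this file, a quick sanity check before registering is to source it yourself and confirm the key variables are set. A sketch against a temporary copy (the values are the ones used above; the /tmp path is only for illustration):

```shell
#!/bin/sh
# Sanity-check an sczbt_config-style file: source it, then confirm the
# variables the register script depends on are non-empty.
cfg=/tmp/sczbt_config.test
cat > "$cfg" <<'EOF'
RS=erpappzone
RG=erpapprg
PARAMETERDIR=/zonedir/erpapp/cluster-pfiles
FAILOVER=true
HAS_RS=erpappstg
Zonename="erpapp"
Zonebrand="native"
EOF

. "$cfg"
for v in "$RS" "$RG" "$PARAMETERDIR" "$Zonename" "$Zonebrand"; do
    [ -n "$v" ] || { echo "missing mandatory value" >&2; exit 1; }
done
echo "sczbt_config looks complete"
```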

bash-3.00# ls -lrt

total 40

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

-rw-r--r-- 1 root bin 5593 Apr 11 16:14 sczbt_config

bash-3.00# ./sczbt_register

sourcing ./sczbt_config

(C779822) Resource type SUNW.gds is not registered

Registration of resource erpappzone failed, please correct the wrong parameters.

Removing parameterfile /zonedir/erpapp/cluster-pfiles/sczbt_erpappzone for resource erpappzone.

If you get the error above, register the SUNW.gds resource type and run the registration again:

bash-3.00# clrt register SUNW.gds

bash-3.00# pwd

/opt

bash-3.00# cd /opt/SUNWsczone/sczbt/util

bash-3.00# ls -lrt

total 40

-r-xr-xr-x 1 root bin 7395 Jul 27 2010 sczbt_register

-rw-r--r-- 1 root root 5520 Apr 11 16:02 sczbt-config-0411

-rw-r--r-- 1 root bin 5593 Apr 11 16:14 sczbt_config

bash-3.00# ./sczbt_register

sourcing ./sczbt_config

Registration of resource erpappzone succeeded.

Validation of resource erpappzone succeeded.

At this point the zone cluster configuration is also complete.

Check the status:

bash-3.00# zoneadm list -ivc

ID NAME STATUS PATH BRAND IP

0 global running / native shared

4 erpdb running /zonedir/erpdb native shared

5 erpapp running /zonedir/erpapp native shared

bash-3.00# scstat

------------------------------------------------------------------

-- Cluster Nodes --

Node name Status

--------- ------

Cluster node: aptest Online

Cluster node: dbtest Online

------------------------------------------------------------------

-- Cluster Transport Paths --

                    Endpoint          Endpoint          Status
                    --------          --------          ------
  Transport path:   aptest:e1000g3    dbtest:e1000g3    Path online
  Transport path:   aptest:e1000g2    dbtest:e1000g2    Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

  Quorum votes possible:   3
  Quorum votes needed:     2
  Quorum votes present:    3

-- Quorum Votes by Node (current status) --

                    Node Name           Present  Possible  Status
                    ---------           -------  --------  ------
  Node votes:       aptest              1        1         Online
  Node votes:       dbtest              1        1         Online

-- Quorum Votes by Device (current status) --

                    Device Name         Present  Possible  Status
                    -----------         -------  --------  ------
  Device votes:     /dev/did/rdsk/d2s2  1        1         Online

------------------------------------------------------------------

-- Device Group Servers --

                    Device Group      Primary           Secondary
                    ------------      -------           ---------

-- Device Group Status --

                    Device Group      Status
                    ------------      ------

-- Multi-owner Device Groups --

                    Device Group      Online Status
                    ------------      -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

              Group Name     Resources
              ----------     ---------
   Resources: erpdbrg        erpdbstg erpdbzone
   Resources: erpapprg       erpappstg erpappzone

-- Resource Groups --

              Group Name     Node Name    State     Suspended
              ----------     ---------    -----     ---------
       Group: erpdbrg        aptest       Online    No
       Group: erpdbrg        dbtest       Offline   No
       Group: erpapprg       aptest       Online    No
       Group: erpapprg       dbtest       Offline   No

-- Resources --

              Resource Name  Node Name    State     Status Message
              -------------  ---------    -----     --------------
    Resource: erpdbstg       aptest       Online    Online
    Resource: erpdbstg       dbtest       Offline   Offline
    Resource: erpdbzone      aptest       Online    Online - Service is online.
    Resource: erpdbzone      dbtest       Offline   Offline
    Resource: erpappzone     aptest       Online    Online - Service is online.
    Resource: erpappzone     dbtest       Offline   Offline
    Resource: erpappstg      aptest       Online    Online
    Resource: erpappstg      dbtest       Offline   Offline

------------------------------------------------------------------

-- IPMP Groups --

              Node Name    Group   Status    Adapter   Status
              ---------    -----   ------    -------   ------
  IPMP Group: aptest       ipmp1   Online    e1000g1   Online
  IPMP Group: aptest       ipmp1   Online    e1000g0   Online
  IPMP Group: dbtest       ipmp1   Online    e1000g1   Online
  IPMP Group: dbtest       ipmp1   Online    e1000g0   Online

-- IPMP Groups in Zones --

              Zone Name    Group   Status    Adapter   Status
              ---------    -----   ------    -------   ------

The following focuses on the steps and methods for switching resources between nodes.

A. There are two approaches: automatic switching through the cluster, and manual switching with zpool export & import.

B. Automatic switching through the cluster

Note: whichever method is used, follow these steps.

Inside the zone: stop the application, then stop the database, then shut down the OS.

In the global zone:

Check which host currently owns the zpools:

bash-3.00# zpool list

NAME         SIZE   ALLOC   FREE   CAP  HEALTH  ALTROOT
erpapppool   81.5G  3.66G  77.8G    4%  ONLINE  /
erpdbpool    81.5G  3.67G  77.8G    4%  ONLINE  /
rpool        31.8G  6.32G  25.4G   19%  ONLINE  -

bash-3.00# clrg offline erpapprg --- run offline first; this effectively shuts down the erpapp zone

bash-3.00# clrg online erpapprg --- then run online; this effectively boots the erpapp zone

Note, however, that because aptest is the cluster's first node, erpapprg remains on aptest after coming online. Even if you run these commands on the dbtest server, the resources will still attach to the aptest server after online, because dbtest is not the first node. Keep this in mind!
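The restart sequence above can be sketched as follows. This is a hedged outline, assuming Sun Cluster 3.2, where clrg/clrs are the short forms of clresourcegroup/clresource and the -n flag names the target node explicitly:

```shell
# Hedged sketch: restart the erpapp resource group on a chosen node.
clrg status erpapprg            # confirm which node currently hosts the group
clrg offline erpapprg           # effectively shuts down the erpapp zone
clrg online -n aptest erpapprg  # bring it online explicitly on aptest
clrs status                     # verify the resources report Online
```

Passing -n avoids surprises from node-preference ordering: the group comes up where you asked, not on the first node by default.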

Performing a switch:

On the dbtest server:

bash-3.00# zoneadm list -ivc

  ID NAME    STATUS     PATH             BRAND   IP
   0 global  running    /                native  shared
   1 erpdb   running    /zonedir/erpdb   native  shared
   - erpapp  installed  /zonedir/erpapp  native  shared

bash-3.00# clrg switch -n aptest erpdbrg --- switch resource group erpdbrg to the aptest server

After the switch, check on the aptest server:

The ZFS file systems have come over:

bash-3.00# zfs list

NAME                        USED  AVAIL  REFER  MOUNTPOINT
erpapppool                 3.66G  76.6G    21K  /erpapppool
erpapppool/erpapp_fs       3.66G  76.6G  3.66G  /zonedir/erpapp
erpdbpool                  3.67G  76.6G    21K  /erpdbpool
erpdbpool/erpdb_fs         3.67G  76.6G  3.67G  /zonedir/erpdb
rpool                      9.08G  24.1G  32.5K  /rpool
rpool/ROOT                 6.23G  24.1G    21K  legacy
rpool/ROOT/s10x_u9wos_14a  6.23G  24.1G  6.23G  /
rpool/dump                 1.00G  24.1G  1.00G  -
rpool/export                 44K  24.1G    23K  /export
rpool/export/home            21K  24.1G    21K  /export/home
rpool/swap                 1.85G  26.0G    16K  -

bash-3.00# zpool list

NAME         SIZE   ALLOC   FREE   CAP  HEALTH  ALTROOT
erpapppool   81.5G  3.66G  77.8G    4%  ONLINE  /
erpdbpool    81.5G  3.67G  77.8G    4%  ONLINE  /
rpool        33.8G  7.24G  26.5G   21%  ONLINE  -

The zones have come over as well, and are running:

bash-3.00# zoneadm list -ivc

  ID NAME    STATUS   PATH             BRAND   IP
   0 global  running  /                native  shared
   3 erpapp  running  /zonedir/erpapp  native  shared
   4 erpdb   running  /zonedir/erpdb   native  shared

In cluster mode, use the cluster commands wherever possible and avoid switching manually with zpool import & export; otherwise you risk competing with the cluster for the resources and corrupting the file systems.

If there is no alternative and export & import must be used, remember to shut down cluster mode first.
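For completeness, the manual procedure can be sketched like this. It is an assumption-laden outline, not a tested runbook: it assumes both nodes have been booted in non-cluster mode (for example with boot -x) so the cluster framework cannot contend for the pool, and it uses this guide's zone and pool names:

```shell
# Hedged sketch: manual failover of erpdbpool with the cluster stopped.
# Run only after booting the nodes in non-cluster mode; otherwise the
# cluster will fight over the pool and may corrupt data.

# On the node that currently owns the pool:
zoneadm -z erpdb halt     # stop the zone using the pool's file systems
zpool export erpdbpool    # release the pool

# On the target node:
zpool import erpdbpool    # take over the pool
zoneadm -z erpdb boot     # restart the zone there
```

The ordering matters: the zone must be halted before the export, or the busy file systems will keep the pool from releasing cleanly.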

Next comes the Oracle database installation.

9. Installing the Oracle database

II) Create the groups and user
1) Add the groups:
groupadd oinstall
groupadd dba
2) Add the user:
useradd -g oinstall -G dba -d /export/home/oracle -s /bin/bash -m oracle
{-g sets the user's primary group, -G the supplementary group, -d the home directory, -s the default shell; oracle is the user name, and -m creates the home directory automatically. To avoid trouble, do not create this directory by hand.}
passwd oracle
{Sets a password for the oracle user; after entering the command, the system prompts for the password and its confirmation.}

III) Create the Oracle installation directories
Create the directories Oracle will be installed under:
mkdir -p /oradata/oracle
mkdir -p /oradata/oracle/product/11.1.2
Then change the owner of /oradata/oracle to oracle and its group to oinstall:
chown -R oracle:oinstall /oradata/oracle
{A note on the Solaris directory layout:
/: the root file system
/bin: executables and basic commands
/usr: UNIX system files
/dev: device files (logical devices)
/devices: device files (physical devices)
/etc: system configuration and administrative data files
/export: directories and files shared with other systems
/home: user home directories
/kernel: kernel modules
/lib: system libraries
/opt: add-on application software
/tmp: temporary files (swap-backed)
/var: system administration files and logs}

IV) Set the Oracle user's environment variables
Log in as root, locate the oracle user's .bash_profile in its home directory, and edit it.

Add the following to .bash_profile:
-bash-3.00$ cat .bash_profile

export ORACLE_HOME=/oradata/oracle/product/11.2.0/dbhome_1

export ORACLE_BASE=/oradata/oracle

export ORACLE_TERM=vt100

export ORACLE_SID=TEST

LD_LIBRARY_PATH=/oradata/oracle/product/11.2.0/dbhome_1/lib:/usr/lib

PATH=/oradata/oracle/product/11.2.0/dbhome_1/bin:/usr/sbin:/usr/bin:/usr/cluster/bin

DISPLAY=192.168.15.157:0.0;export DISPLAY
{ORACLE_BASE is the Oracle base directory and ORACLE_HOME is the product directory: if one machine runs two Oracle releases, they can share a single ORACLE_BASE but each gets its own ORACLE_HOME.}
Then add $ORACLE_HOME/bin at the front of the path,
for example: set path=($ORACLE_HOME/bin /usr/ccs/bin /bin /usr/bin ). Enter it exactly as shown; do not use absolute paths.
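After logging in as oracle, a quick sanity check confirms that the profile took effect; the snippet below recreates this guide's example paths and verifies that $ORACLE_HOME/bin is actually on PATH:

```shell
# Recreate the guide's example settings and verify the PATH entry.
export ORACLE_BASE=/oradata/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:/usr/sbin:/usr/bin:/usr/cluster/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib

# Check that PATH contains $ORACLE_HOME/bin as one of its components.
case ":$PATH:" in
  *":$ORACLE_HOME/bin:"*) echo "PATH ok" ;;
  *)                      echo "PATH is missing ORACLE_HOME/bin" ;;
esac
```

If the check fails, the installer and sqlplus will not be found by name, which is the most common symptom of a mis-edited .bash_profile.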
V) Modify the Solaris system parameters
1) As root, make a backup copy of /etc/system, for example:
cp /etc/system /etc/system.orig
2) Edit /etc/system and append the following:
set noexec_user_stack=1
set semsys:seminfo_semmni=300
set semsys:seminfo_semmns=1050
set semsys:seminfo_semmsl=400
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=6400000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=300
set shmsys:shminfo_shmseg=30
(The shmmax value above assumes a server with 8 GB of RAM; scale it up or down proportionally for other memory sizes.)
3) Reboot so the parameters take effect:
/usr/sbin/reboot
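The "scale proportionally" rule for shmmax can be written out explicitly. The factor of 800000000 bytes per GB is just the guide's 8 GB figure divided by 8, a rule-of-thumb assumption rather than an Oracle-mandated value:

```shell
# Derive shminfo_shmmax from installed RAM, scaling the guide's
# 6400000000-for-8GB figure linearly (assumed rule of thumb).
ram_gb=8                          # set to this server's RAM in GB
shmmax=$((ram_gb * 800000000))    # 6400000000 / 8 = 800000000 per GB
echo "set shmsys:shminfo_shmmax=$shmmax"
```

For a 16 GB server this prints set shmsys:shminfo_shmmax=12800000000, which can be pasted straight into /etc/system.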

The screenshots below are for reference only, to illustrate the order and method of installation; use your own judgment.
VI) Installing the Oracle software
1) Log in to ftp as the oracle user and upload the installer 10gr2_db_sol[1].cpio to the oracle user's home directory.
2) Unpack it: cpio -idmv < 10gr2_db_sol[1].cpio. If unpacking fails with errors, switch to the root account and unpack.
3) Log in as oracle and run ./runInstaller

Step 1: be sure NOT to select "Create Starter Database".

image

Step 2: operating system checks.

Step 3: select the configuration options.

image

Step 4: review the installation summary.

Step 5: the installation progress is displayed.

Step 6: partway through, you are prompted to run a script; run it as root.

image

Step 7: the Oracle software installation completes.

Step 8: in the oracle/product/10gr2/bin directory run ./dbca to bring up the database creation screens.

image

Step 9: choose the database template; accept the default.

Step 10: set the database service name.

Step 11: database configuration begins.

image

Step 12: set passwords for the system accounts (for simplicity they can all be the same).

Step 13: choose the storage mechanism. File system is used here because it is simpler to set up (ASM could not be made to work despite much effort).

image

Steps 14, 15, 16: accept the defaults.

Step 17: memory and other parameters.
*Memory: default; processes: adjust as needed; character set: ZHS16GBK; connection mode: Dedicated.

image

Step 18: accept the default; installation then proceeds.

Step 19: to run the net manager, execute netmgr, which lets you configure the listener and service names.
To stop or start the listener, run:
lsnrctl stop
lsnrctl start
To start the database instance:
log in with sqlplus / as sysdba and run startup.

VII) Verify the installation
1) Check that the installation succeeded:
sqlplus system/yourpassword@yoursid
SQL> select * from tab;
2) Check that shutdown and startup work:
sqlplus /nolog
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> conn / as sysdba
SQL> startup
3) Check the listener status:
lsnrctl status

Vistor, tape library emulation software

About Vistor:
The Vistor virtual tape library system is a VTL software solution from cofio for high-performance disk-based backup; it manages tapes the same way a real tape library does, which improves administrative efficiency. Vistor supports iSCSI and FC, can emulate many models of tape library, allows several distinct libraries to be created, and works with backup software such as NBU, Legato Networker, and BakBone.

image

The Vistor virtual tape library system architecture

There are two ways to build a Vistor system: install your own Linux system and unpack Vistor's tgz archive yourself, or download the ViStor VMware Image and run it under VMware for a quick setup. Aladuo used the second approach here.

Preparing the Vistor installation environment:

VMware Workstation 6.5
Virtual machine 1: the vmware image downloaded from the Vistor official site; it is actually a CentOS 5.2 Linux environment with Vistor 2.1.1 already integrated.
Virtual machine 2: Windows Server 2003, for the backup software, with the Windows initiator installed.

Vistor installation and configuration steps
1. First register a user at the Vistor official site, http://www.cofio.com/Register/. After activation, open the user page and choose ViStor Downloads at the top left (note that AIMstor is a different cofio backup product and is not the same thing as Vistor!), then download the ViStor VMware Image; this is the Vistor image file we want, 239 MB in size.
2. Unpack the downloaded ViStor VMware Image archive to reveal the familiar vmware files, and open them with VMware Workstation 6.5. By default the VM is given 1024 MB of memory and a maximum disk size of 500 GB. Neither needs changing, even if you do not have that much space; it is only a maximum, and you can control actual disk usage later by setting the tape size and tape count inside Vistor.
3. A Linux virtual machine (our virtual machine 1) boots; the default login and password are root/password. Log in and inspect the system and the Vistor installation: as the figure shows, Vistor is installed under /usr/cofio.

image

The Vistor installation directory

4. Change the IP address, gateway, and related settings

Since this is a Red Hat-style Linux system, the method below applies; it differs slightly from Solaris.

ifconfig eth0 <new-ip>

Then edit /etc/sysconfig/network-scripts/ifcfg-eth0 to change the IP:

a. Change the IP address

[aeolus@db network-scripts]$ vi ifcfg-eth0

DEVICE=eth0

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.0.50

NETMASK=255.255.255.0

GATEWAY=192.168.0.1

b. Change the gateway

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=vistor

GATEWAY=192.168.0.1

c. Change the DNS servers

[aeolus@db etc]$ vi resolv.conf

nameserver (omitted)

nameserver (omitted)

d. Restart the network service

#/etc/init.d/network restart
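Typos in these files are easy to make, especially a full-width Chinese period ("。") slipping into NETMASK when a Chinese input method is active. A tiny format check like the one below, a hypothetical helper rather than anything Red Hat ships, catches such mistakes before restarting the network:

```shell
# valid_ipv4: accept only four dot-separated runs of 1-3 digits.
# This is a format check (it does not verify each octet is <= 255),
# but it is enough to catch full-width punctuation and missing dots.
valid_ipv4() {
  printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

valid_ipv4 "255.255.255.0" && echo "netmask ok"
valid_ipv4 "255。255.255.0" || echo "netmask has a typo"
```

Running it prints "netmask ok" for the correct value and flags the full-width-period variant as a typo.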

From another machine on the same network, browse to http://192.168.0.50:5050 to reach the Vistor login screen shown below; the default password is blank.

image

The Vistor login screen

5. Enter the Vistor system management interface.

image

The Vistor management interface

6. Open Manage Library to configure the tape size. The default is 100 GB; choose Resize to change the capacity. Here it is reduced to 1 GB.

image

image

Setting the tape capacity

7. Choose Configure Library from the top menu to set the virtual library's properties: its name, the robot and drive models to emulate, the number of drives, and the number of slots, as shown below:

image

Setting the library properties in Vistor

8. The robots and tape drive types Vistor supports are shown below:

image

Robots supported by Vistor

image

Drives supported by Vistor

9. At this point the Vistor virtual tape library is fully configured. Note that you must click Run manually, because the library is offline by default.

image

Installing the OSB server and client

bash-3.00# mkdir -p /usr/local/oracle/backup

bash-3.00# cd /usr

bash-3.00# ls

5bin appserver cluster games java lib net openwin postgres sadm snadm SUNWale ucblib X11R6

adm aset demo gnome jdk local news perl5 preserve sbin spool tmp vmsys xpg4

apache bin dict include kernel mail oasys pgadmin3 proc sfw src ucb X xpg6

apache2 ccs dt j2se kvm man old platform pub share sunvts ucbinclude X11

bash-3.00# cd local

bash-3.00# ls

oracle

bash-3.00# cd oracle

bash-3.00# ls -lrt

total 1

drwxr-xr-x 2 root root 2 May 4 11:17 backup

bash-3.00# cd backup

bash-3.00# ls

bash-3.00# /install/osb-10.4.0.1.0_solaris.x64_cdrom110923/setup

Welcome to Oracle's setup program for Oracle Secure Backup. This

program loads Oracle Secure Backup software from the CD-ROM to a

filesystem directory of your choosing.

This CD-ROM contains Oracle Secure Backup version 10.4.0.1.0_SOLARIS.X64.

Please wait a moment while I learn about this host... done.

- - - - - - - - - - - - - - - - - - - - - - - - - - -

1. solarisx86_64 administrative server, media server, client

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Loading Oracle Secure Backup installation tools... done.

Loading solarisx86_64 administrative server, media server, client... done.

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Oracle Secure Backup has installed a new obparameters file.

Your previous version has been saved as install/obparameters.savedbysetup.

Any changes you have made to the previous version must be

made to the new obparameters file.

Would you like the opportunity to edit the obparameters file

Please answer 'yes' or 'no' [no]:

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Loading of Oracle Secure Backup software from CD-ROM is complete.

You may unmount and remove the CD-ROM.

Would you like to continue Oracle Secure Backup installation with

'installob' now? (The Oracle Secure Backup Installation Guide

contains complete information about installob.)

Please answer 'yes' or 'no' [yes]: no

When you are ready to continue:

1. log in as (or 'su' to) root

2. cd to /usr/local/oracle/backup

3. run install/installob

bash-3.00# pwd

/usr/local/oracle/backup

bash-3.00# ls -lrt

total 25

drwxrwxrwx 7 root root 8 Sep 24 2011 apache

drwxrwxrwx 2 root root 4 Sep 24 2011 device

drwxrwxrwx 2 root root 4 Sep 24 2011 help

drwxrwxrwx 2 root root 8 Sep 24 2011 tools.solarisx86_64

drwxrwxrwx 2 root root 21 Sep 24 2011 samples

drwxrwxrwx 2 root root 93 Sep 24 2011 install

drwxr-xr-x 4 root root 4 May 4 11:18 man

bash-3.00# cd install

bash-3.00# ls -lrt

total 1404

-rwxrwxrwx 1 root root 3379 Sep 24 2011 S92OB

-rwxrwxrwx 1 root root 15840 Sep 24 2011 installhere

-rwxrwxrwx 1 root root 7259 Sep 24 2011 installdriver

-rwxrwxrwx 1 root root 8268 Sep 24 2011 initinstall.sh

-rwxrwxrwx 1 root root 2780 Sep 24 2011 hotdate.sh

-rwxrwxrwx 1 root root 1609 Sep 24 2011 dupcheck.sh

-rwxrwxrwx 1 root root 16280 Sep 24 2011 doinstall.sh

-rwxrwxrwx 1 root root 2238 Sep 24 2011 daemons.sh

-rwxrwxrwx 1 root root 5601 Sep 24 2011 checkspace.sh

-rwxrwxrwx 1 root root 2789 Sep 24 2011 checkdirs.sh

-rwxrwxrwx 1 root root 9286 Sep 24 2011 canbe

-rwxrwxrwx 1 root root 2061 Sep 24 2011 ayenay.sh

-rwxrwxrwx 1 root root 1572 Sep 24 2011 linkexists.sh

-rwxrwxrwx 1 root root 84 Sep 24 2011 killblanks.sed

-rwxrwxrwx 1 root root 112 Sep 24 2011 justtokens.sed

-rwxrwxrwx 1 root root 1258 Sep 24 2011 isnfssub.sh

-rwxrwxrwx 1 root root 1898 Sep 24 2011 isnfs.sh

-rwxrwxrwx 1 root root 2709 Sep 24 2011 iscwd.sh

-rwxrwxrwx 1 root root 6649 Sep 24 2011 instnet.sh

-rwxrwxrwx 1 root root 81520 Sep 24 2011 installob

-rwxrwxrwx 1 root root 10413 Sep 24 2011 installnet

-rwxrwxrwx 1 root root 3797 Sep 24 2011 installhost

-rwxrwxrwx 1 root root 8093 Sep 24 2011 make_sol.sh

-rwxrwxrwx 1 root root 10605 Sep 24 2011 make_hppa.sh

-rwxrwxrwx 1 root root 10375 Sep 24 2011 make_hp800.sh

-rwxrwxrwx 1 root root 7864 Sep 24 2011 makefoothld.sh

-rwxrwxrwx 1 root root 41796 Sep 24 2011 makedev

-rwxrwxrwx 1 root root 1735 Sep 24 2011 makealink.sh

-rwxrwxrwx 1 root root 1917 Sep 24 2011 makeadmdir.sh

-rwxrwxrwx 1 root root 19009 Sep 24 2011 machinfo.sh

-rwxrwxrwx 1 root root 6662 Sep 24 2011 loadlicense

-rwxrwxrwx 1 root root 5297 Sep 24 2011 hp8buses.sh

-rwxrwxrwx 1 root root 2753 Sep 24 2011 mymachinfo.sh

-rwxrwxrwx 1 root root 1029 Sep 24 2011 munghpver.sed

-rwxrwxrwx 1 root root 2441 Sep 24 2011 mintmpspace.sh

-rwxrwxrwx 1 root root 5366 Sep 24 2011 md_solaris.sh

-rwxrwxrwx 1 root root 3882 Sep 24 2011 md_sgi.sh

-rwxrwxrwx 1 root root 6952 Sep 24 2011 md_rs6000.sh

-rwxrwxrwx 1 root root 2776 Sep 24 2011 md_linux86.sh

-rwxrwxrwx 1 root root 1139 Sep 24 2011 md_linux86-glibc.sh

-rwxrwxrwx 1 root root 4312 Sep 24 2011 md_hppa.sh

-rwxrwxrwx 1 root root 7975 Sep 24 2011 md_hp800.sh

-rwxrwxrwx 1 root root 2564 Sep 24 2011 md_chkexist.sh

-rwxrwxrwx 1 root root 1410 Sep 24 2011 maketarlst.sed

-rwxrwxrwx 1 root root 46518 Sep 24 2011 makelinks.sh

-rwxrwxrwx 1 root root 3543 Sep 24 2011 protectwc.sh

-rwxrwxrwx 1 root root 3687 Sep 24 2011 protect.sh

-rwxrwxrwx 1 root root 13755 Sep 24 2011 probedev

-rwxrwxrwx 1 root root 3860 Sep 24 2011 prefer_rmt

-rwxrwxrwx 1 root root 3869 Sep 24 2011 prefer_ob

-rwxrwxrwx 1 root root 20859 Sep 24 2011 obparameters

-rwxrwxrwx 1 root root 6498 Sep 24 2011 obndf

-rwxrwxrwx 1 root root 1334 Sep 24 2011 obgserverfiles

-rwxrwxrwx 1 root root 2410 Sep 24 2011 obgclientfiles

-rwxrwxrwx 1 root root 1291 Sep 24 2011 obgadminfiles

-rwxrwxrwx 1 root root 2318 Sep 24 2011 obclientfiles

-rwxrwxrwx 1 root root 1447 Sep 24 2011 obadminfiles

-rwxrwxrwx 1 root root 251 Sep 24 2011 nfsmpars.sed

-rwxrwxrwx 1 root root 1673 Sep 24 2011 nfsmount.sh

-rwxrwxrwx 1 root root 4816 Sep 24 2011 tgttmproom.sh

-rwxrwxrwx 1 root root 2981 Sep 24 2011 tgtobroom.sh

-rwxrwxrwx 1 root root 2865 Sep 24 2011 tgtmachinfo.sh

-rwxrwxrwx 1 root root 4937 Sep 24 2011 stoprb

-rwxrwxrwx 1 root root 3789 Sep 24 2011 stopd.sh

-rwxrwxrwx 1 root root 1857 Sep 24 2011 sol_bus.sh

-rwxrwxrwx 1 root root 5176 Sep 24 2011 setupserver.sh

-rwxrwxrwx 1 root root 13151 Sep 24 2011 setupobc.sh

-rwxrwxrwx 1 root root 2667 Sep 24 2011 setlists.sh

-rwxrwxrwx 1 root root 4890 Sep 24 2011 setdevdfts.sh

-rwxrwxrwx 1 root root 1832 Sep 24 2011 selhosts.sh

-rwxrwxrwx 1 root root 1262 Sep 24 2011 quietstderr.sh

-rwxrwxrwx 1 root root 5795 Sep 24 2011 osbcvt

-rwxrwxrwx 1 root root 2472 Sep 24 2011 observerfiles

-rwxrwxrwx 1 root root 7136 Sep 24 2011 webconfig

-rwxrwxrwx 1 root root 5160 Sep 24 2011 webcheck.sh

-rwxrwxrwx 1 root root 8772 Sep 24 2011 valdrives.sh

-rwxrwxrwx 1 root root 27632 Sep 24 2011 updaterc.sh

-rwxrwxrwx 1 root root 2883 Sep 24 2011 updatendf.sh

-rwxrwxrwx 1 root root 1185 Sep 24 2011 updatecnf.sh

-rwxrwxrwx 1 root root 3889 Sep 24 2011 unmk_rs6000.sh

-rwxrwxrwx 1 root root 18932 Sep 24 2011 uninstallob

-rwxrwxrwx 1 root root 30555 Sep 24 2011 uninstallhere

-rwxrwxrwx 1 root root 4460 Sep 24 2011 uiparse.sh

-rwxrwxrwx 1 root root 58 Sep 24 2011 trexit

-rwxrwxrwx 1 root root 34 Sep 24 2011 trenter

-rwxrwxrwx 1 root root 2913 Sep 24 2011 tmp_room.sh

-rwxr-xr-x 1 root root 1215 Sep 24 2011 size_table.sh

-rwxr-xr-x 1 root root 20860 May 4 11:19 obparameters.savedbysetup

bash-3.00# ./installob

Welcome to installob, Oracle Secure Backup's installation program.

For most questions, a default answer appears enclosed in square brackets.

Press Enter to select this answer.

Please wait a few seconds while I learn about this machine... done.

Have you already reviewed and customized install/obparameters for your

Oracle Secure Backup installation [yes]?

- - - - - - - - - - - - - - - - - - - - - - - - - - -

Oracle Secure Backup is not yet installed on this machine.

Oracle Secure Backup's Web server has been loaded, but is not yet configured.

Choose from one of the following options. The option you choose defines

the software components to be installed.

Configuration of this host is required after installation completes.

You can install the software on this host in one of the following ways:

(a) administrative server, media server and client

(b) media server and client

(c) client

If you are not sure which option to choose, please refer to the Oracle

Secure Backup Installation Guide. (a,b or c) [a]?

Beginning the installation. This will take just a minute and will produce

several lines of informational output.

Installing Oracle Secure Backup on aptest (solaris version 5.10)

You must now enter a password for the Oracle Secure Backup encryption

key store. Oracle suggests you choose a password of at least 8

characters in length, containing a mixture of alphabetic and numeric

characters.

Please enter the key store password:

Re-type password for verification:

You must now enter a password for the Oracle Secure Backup 'admin' user.

Oracle suggests you choose a password of at least 8 characters in length,

containing a mixture of alphabetic and numeric characters.

Please enter the admin password:

Re-type password for verification:

You should now enter an email address for the Oracle Secure Backup 'admin'

user. Oracle Secure Backup uses this email address to send job summary

reports and to notify the user when a job requires input. If you leave this

blank, you can set it later using the obtool's 'chuser' command.

Please enter the admin email address: [email protected]

generating links for admin installation with Web server

updating default library list via crle to include /usr/local/oracle/backup/.lib.solarisx86_64

updating secure library list via crle to include /usr/local/oracle/backup/.lib.solarisx86_64

checking Oracle Secure Backup's configuration file (/etc/obconfig)

setting Oracle Secure Backup directory to /usr/local/oracle/backup in /etc/obconfig

setting local database directory to /usr/etc/ob in /etc/obconfig

setting temp directory to /usr/tmp in /etc/obconfig

setting administrative directory to /usr/local/oracle/backup/admin in /etc/obconfig

protecting the Oracle Secure Backup directory

installing /etc/init.d/OracleBackup for observiced start/kill ops at

operating system run-level transition

installing start-script (link) /etc/rc2.d/S92OracleBackup

installing kill-script (link) /etc/rc1.d/K01OracleBackup

installing kill-script (link) /etc/rc0.d/K01OracleBackup

initializing the administrative domain

Is aptest connected to any tape libraries that you'd like to use with

Oracle Secure Backup [no]?

Is aptest connected to any tape drives that you'd like to use with

Oracle Secure Backup [no]?

Installation summary:

Installation Host OS Driver OS Move Reboot

Mode Name Name Installed? Required? Required?

admin aptest solaris no no no

Oracle Secure Backup is now ready for your use
