Wednesday, September 02, 2015

[Repost] Oracle RAC (11.2.0.4) for AIX 6.1 Installation Manual

Repost: Oracle RAC 11gR2 (11.2.0.4) for AIX 6.1 + ASM Installation Manual

Ref: http://blog.csdn.net/alangmei/article/details/18310381

 

Some of the screenshots and sections in this document are taken from material published by others on the web.

2 Installation Environment

Nodes

Node name | Instance name | Database name | CPU                  | RAM  | OS
rac1      | rac1          | rac           | 4 CPU x 8 x 4228 MHz | 32GB | AIX 6.1
rac2      | rac2          | rac           | 4 CPU x 8 x 4228 MHz | 32GB | AIX 6.1

Network configuration

Node name | Public IP   | Private IP    | Virtual IP  | SCAN name | SCAN IP
rac1      | 172.1.1.204 | 192.168.0.204 | 172.1.1.206 | scan-ip   | 172.1.1.208
rac2      | 172.1.1.205 | 192.168.0.205 | 172.1.1.207 |           |

Oracle software components

Software component  | OS user | Primary group | Secondary groups                    | Home directory | Oracle Base / Oracle Home
Grid Infrastructure | grid    | oinstall      | asmadmin, asmdba, asmoper, oinstall | /home/grid     | /u01/app/grid ; /u01/app/11.2/grid
Oracle RAC          | oracle  | oinstall      | dba, oper, asmdba, oinstall         | /home/oracle   | /u01/app/oracle ; /u01/app/oracle/product/11.2.0/db_1

Storage components

Storage component | File system | Volume size | ASM disk group | ASM redundancy | Device names
OCR/Voting        | ASM         | 50G         | CRSDG          | normal         | /dev/rhdisk4-6
Data              | ASM         | 600G        | DATA           | normal         | /dev/rhdisk7-9
Recovery area     | ASM         | 100G        | FRA_ARCHIVE    | normal         | /dev/rhdisk10-12
An Oracle RAC architecture uses four kinds of IP addresses: Public IP, Private IP, VIP, and SCAN IP. Their roles are as follows:

Private IP: the private IP is used for the interconnect heartbeat between the nodes. From the user's point of view it can be ignored; put simply, it is the address the two servers use to keep each other synchronized.

Public IP: the public IP is normally used by administrators to make sure they are working on the correct machine; it is also called the real IP.

VIP: the virtual IP is used by client applications. Normally the VIP floats on the network interface that carries the Public IP. A VIP supports failover: if the node hosting it goes down, another node takes the VIP over automatically and clients do not notice. This is one reason for using RAC; another, in my view, is load balancing. When configuring tnsnames.ora on the client, some situations call for the VIP while others require the Public IP. For example, when diagnosing a deadlock on a specific instance, using the Public IP guarantees that you connect to the machine you intend to work on; using the VIP in that case is non-deterministic, because the server enables load balancing by default, so you may intend to connect to node A and be routed to node B instead.

SCAN IP: before Oracle 11gR2, a RAC database required the client-side tnsnames configuration to list the connection information of every node in order to obtain RAC features such as load balancing and failover. Whenever nodes were added to or removed from the cluster, the tns entries on every client machine had to be updated promptly to avoid problems. In 11gR2, to simplify this configuration, the SCAN (Single Client Access Name) feature was introduced. It inserts a virtual service layer between the database and the clients, namely the SCAN IP and the SCAN IP listener. The client only needs the SCAN IP in its TNS configuration and connects to the cluster database through the SCAN listener, so adding or removing cluster nodes no longer affects the clients.
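To illustrate the difference in client configuration, here is a minimal tnsnames.ora sketch based on the names planned below (scan-ip, rac1-vip); the entry names RAC_SCAN and RAC1_DIRECT are placeholders of my own, not part of the original setup:

RAC_SCAN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-ip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac))
  )

# Connects to one specific instance through its VIP, e.g. when diagnosing node rac1
RAC1_DIRECT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac)(INSTANCE_NAME = rac1))
  )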



Planning of the two RAC node hosts:

Gateway: 10.1.0.254

Hostname  | Host alias | Type    | IP address                   | Resolution
rac1      | rac1       | Public  | 172.1.1.204/255.255.255.0    | hosts
rac1-vip  | rac1-vip   | Virtual | 172.1.1.206/255.255.255.0    | hosts
rac1-priv | rac1-priv  | Private | 192.168.0.204/255.255.255.0  | hosts
rac2      | rac2       | Public  | 172.1.1.205/255.255.255.0    | hosts
rac2-vip  | rac2-vip   | Virtual | 172.1.1.207/255.255.255.0    | hosts
rac2-priv | rac2-priv  | Private | 192.168.0.205/255.255.255.0  | hosts
scan-ip   | scan-ip    | Virtual | 172.1.1.208/255.255.255.0    | hosts

2.4 Storage Disk Planning

Disk name | Size  | Purpose
hdisk4    | 50GB  | CRSDG
hdisk5    | 51GB  | CRSDG
hdisk6    | 52GB  | CRSDG
hdisk7    | 600GB | DATA
hdisk8    | 601GB | DATA
hdisk9    | 602GB | DATA
hdisk10   | 100GB | FRA_ARCHIVE
hdisk11   | 101GB | FRA_ARCHIVE
hdisk12   | 102GB | FRA_ARCHIVE

2.5 Database Security Information

Item                        | User name  | Password or instance
Operating system user       | root       |
Grid installation user      | grid       |
Database installation user  | oracle     |
Cluster instance name       | rac        |
ASM administration          | sys        |
Database administration     | sys/system |
Audit user                  | rac_vault  |

2.6 Installation Directory Planning

Directory planning principle: create a /u01 file system for the grid and database software. All software is installed under /u01/app, with separate directories and separate permissions for grid and database. The grid user's ORACLE_BASE and ORACLE_HOME should be placed in different directory trees. The plan is as follows:

Create a 70G logical volume: oralv
Create a file system on it, mount point: /u01

grid base directory: /u01/app/grid                  # ORACLE_BASE of the grid user
grid ASM installation directory: /u01/app/11.2/grid # ORACLE_HOME of the grid user, i.e. the software location during installation
Oracle base directory: /u01/app/oracle              # ORACLE_BASE of the oracle user

Note: this plan was written up afterwards; the installation described below deviates from it slightly. Both the grid user's ORACLE_BASE and ORACLE_HOME must be created manually. For the oracle user only the ORACLE_BASE directory needs to be created.
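As a rough sketch of the file-system step above (the volume group, partition count, and PP size are assumptions for illustration and must be adapted to the actual system):

# 280 partitions x 256 MB PP size = 70 GB; adjust to your volume group's PP size
# mklv -t jfs2 -y oralv rootvg 280
# crfs -v jfs2 -d oralv -m /u01 -A yes -p rw
# mount /u01
# df -g /u01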

3 Pre-installation Checklist

Note: unless stated otherwise, the checks and configuration tasks below must be performed on all RAC nodes. A few steps only need to be run on a single node; those steps are pointed out explicitly, so pay attention while working through the checks.

3.1 Check Host Hardware

The host hardware checks cover: available memory, paging (swap) space, available disk space, and free space in /tmp.

1. Use the following commands to check the host's memory and swap space. At least 2.5GB of memory is required, and the swap space should be twice the available physical memory.

# /usr/sbin/lsattr -HE -l sys0 -a realmem

attribute value  description                   user_settable

 

realmem  32243712 Amount of usable physical memory in Kbytes False

#/usr/sbin/lsps -a

 

2. Check the hardware architecture: # /usr/bin/getconf HARDWARE_BITMODE. A 64-bit hardware architecture is required.

3. Check that the installation directories for the clusterware and database software have at least 6.5GB of free space and that /tmp has at least 1GB free: # df -h

4. View the host information:

# prtconf

System Model: IBM,8231-E1D

Machine Serial Number:

Processor Type: PowerPC_POWER7

Processor Implementation Mode: POWER 7

Processor Version: PV_7_Compat

Number Of Processors: 8

Processor Clock Speed: 4228 MHz

CPU Type: 64-bit

Kernel Type: 64-bit

LPAR Info: 106-E80AT

Memory Size: 31488 MB

Good Memory Size: 31488 MB

Platform Firmware level: AL770_052

Firmware Version: IBM,AL770_052

Console Login: enable

Auto Restart: true

Full Core: false

 

Network Information

        Host Name: rac1

        IP Address: 172.1.1.204

        Sub Netmask: 255.255.255.0

        Gateway: 10.1.0.254

        Name Server:

        Domain Name:

 

Paging Space Information

        Total Paging Space: 9216MB

        Percent Used: 1%

 

Volume Groups Information

==============================================================================

Active VGs

==============================================================================

rootvg:

PV_NAME           PV STATE          TOTAL PPs   FREE PPs   FREE DISTRIBUTION

hdisk0            active            558        304         111..80..00..01..112

hdisk1           active            558         450         111..86..30..111..112

INSTALLED RESOURCE LIST

The following resources are installed on the machine.

+/- = Added or deleted from Resource List.

*   = Diagnostic support not available.

       

  Model Architecture: chrp

  Model Implementation: Multiple Processor, PCIbus

       

+ sys0                                                                         System Object

+ sysplanar0                                                                    SystemPlanar

* vio0                                                                         Virtual I/O Bus

* vsa1             U78AB.001.WZSKA2R-P1-T2                                      LPARVirtual Serial Adapter

* vty1             U78AB.001.WZSKA2R-P1-T2-L0                                   AsynchronousTerminal

* vsa0             U78AB.001.WZSKA2R-P1-T1                                      LPARVirtual Serial Adapter

* vty0             U78AB.001.WZSKA2R-P1-T1-L0                                   AsynchronousTerminal

* pci8             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ sissas2          U78AB.001.WZSKA2R-P1-C6-T1                                   PCI Expressx8 Ext Dual-x4 3Gb SAS Adapter

* sas2             U78AB.001.WZSKA2R-P1-C6-T1                                   ControllerSAS Protocol

* sfwcomm6                                                                     SAS Storage Framework Comm

* sata2            U78AB.001.WZSKA2R-P1-C6-T1                                   ControllerSATA Protocol

* pci7             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ ent6             U78AB.001.WZSKA2R-P1-C5-T1                                   2-Port Gigabit Ethernet-SX PCI-ExpressAdapter (14103f03)

+ ent7             U78AB.001.WZSKA2R-P1-C5-T2                                   2-PortGigabit Ethernet-SX PCI-Express Adapter (14103f03)

* pci6             U78AB.001.WZSKA2R-P1                                         PCI Express Bus

+ ent4             U78AB.001.WZSKA2R-P1-C4-T1                                   2-PortGigabit Ethernet-SX PCI-Express Adapter (14103f03)

+ ent5             U78AB.001.WZSKA2R-P1-C4-T2                                   2-Port Gigabit Ethernet-SX PCI-ExpressAdapter (14103f03)

* pci5             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ fcs2             U78AB.001.WZSKA2R-P1-C3-T1                                   8Gb PCIExpress Dual Port FC Adapter (df1000f114108a03)

* fcnet2           U78AB.001.WZSKA2R-P1-C3-T1                                   FibreChannel Network Protocol Device

+ fscsi2           U78AB.001.WZSKA2R-P1-C3-T1                                   FC SCSI I/OController Protocol Device

* sfwcomm2         U78AB.001.WZSKA2R-P1-C3-T1-W0-L0                             Fibre ChannelStorage Framework Comm

+ fcs3             U78AB.001.WZSKA2R-P1-C3-T2                                   8Gb PCIExpress Dual Port FC Adapter (df1000f114108a03)

* fcnet3           U78AB.001.WZSKA2R-P1-C3-T2                                   FibreChannel Network Protocol Device

+ fscsi3           U78AB.001.WZSKA2R-P1-C3-T2                                   FC SCSI I/OController Protocol Device

* sfwcomm3         U78AB.001.WZSKA2R-P1-C3-T2-W0-L0                             Fibre ChannelStorage Framework Comm

* pci4             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ fcs0             U78AB.001.WZSKA2R-P1-C2-T1                                   8Gb PCI ExpressDual Port FC Adapter (df1000f114108a03)

* fcnet0           U78AB.001.WZSKA2R-P1-C2-T1                                   FibreChannel Network Protocol Device

+ fscsi0           U78AB.001.WZSKA2R-P1-C2-T1                                   FC SCSI I/OController Protocol Device

* hdisk8          U78AB.001.WZSKA2R-P1-C2-T1-W5000D3100070E30C-L5000000000000  Compellent FC SCSI Disk Drive

* hdisk9          U78AB.001.WZSKA2R-P1-C2-T1-W5000D3100070E30C-L6000000000000  Compellent FC SCSI Disk Drive

* sfwcomm0         U78AB.001.WZSKA2R-P1-C2-T1-W0-L0                             Fibre ChannelStorage Framework Comm

+ fcs1             U78AB.001.WZSKA2R-P1-C2-T2                                   8Gb PCIExpress Dual Port FC Adapter (df1000f114108a03)

* fcnet1           U78AB.001.WZSKA2R-P1-C2-T2                                   FibreChannel Network Protocol Device

+ fscsi1           U78AB.001.WZSKA2R-P1-C2-T2                                   FC SCSI I/OController Protocol Device

hdisk4          U78AB.001.WZSKA2R-P1-C2-T2-W5000D3100070E30A-L1000000000000  Compellent FC SCSI Disk Drive

*hdisk5          U78AB.001.WZSKA2R-P1-C2-T2-W5000D3100070E30A-L2000000000000  Compellent FC SCSI Disk Drive

*hdisk6           U78AB.001.WZSKA2R-P1-C2-T2-W5000D3100070E30A-L3000000000000  Compellent FC SCSI Disk Drive

*hdisk7          U78AB.001.WZSKA2R-P1-C2-T2-W5000D3100070E30A-L4000000000000  Compellent FC SCSI Disk Drive

* sfwcomm1         U78AB.001.WZSKA2R-P1-C2-T2-W0-L0                             Fibre Channel StorageFramework Comm

* pci3             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ ent0             U78AB.001.WZSKA2R-P1-C7-T1                                   4-PortGigabit Ethernet PCI-Express Adapter (e414571614102004)

+ ent1             U78AB.001.WZSKA2R-P1-C7-T2                                   4-PortGigabit Ethernet PCI-Express Adapter (e414571614102004)

+ ent2             U78AB.001.WZSKA2R-P1-C7-T3                                   4-Port Gigabit Ethernet PCI-ExpressAdapter (e414571614102004)

+ ent3             U78AB.001.WZSKA2R-P1-C7-T4                                   4-PortGigabit Ethernet PCI-Express Adapter (e414571614102004)

* pci2             U78AB.001.WZSKA2R-P1                                         PCI ExpressBus

+ sissas1          U78AB.001.WZSKA2R-P1-C18-T1                                  PCIe x4Internal 3Gb SAS RAID Adapter

* sas1             U78AB.001.WZSKA2R-P1-C18-T1                                  ControllerSAS Protocol

* sfwcomm5                                                                     SAS Storage Framework Comm

+ ses0             U78AB.001.WZSKA2R-P2-Y2                                      SASEnclosure Services Device

+ ses1             U78AB.001.WZSKA2R-P2-Y1                                      SASEnclosure Services Device

* tmscsi1         U78AB.001.WZSKA2R-P1-C18-T1-LFE0000-L0                       SAS I/O ControllerInitiator Device

* sata1            U78AB.001.WZSKA2R-P1-C18-T1                                  Controller SATAProtocol

* pci1             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

* pci9             U78AB.001.WZSKA2R-P1                                         PCIBus

+ usbhc0           U78AB.001.WZSKA2R-P1                                         USBHost Controller (33103500)

+ usbhc1           U78AB.001.WZSKA2R-P1                                         USBHost Controller (33103500)

+ usbhc2           U78AB.001.WZSKA2R-P1                                         USB Enhanced HostController (3310e000)

* pci0             U78AB.001.WZSKA2R-P1                                         PCIExpress Bus

+ sissas0          U78AB.001.WZSKA2R-P1-T9                                      PCIe x4Planar 3Gb SAS RAID Adapter

* sas0             U78AB.001.WZSKA2R-P1-T9                                     Controller SAS Protocol

* sfwcomm4                                                                     SAS StorageFramework Comm

+ hdisk0           U78AB.001.WZSKA2R-P3-D1                                      SAS DiskDrive (300000 MB)

+ hdisk1          U78AB.001.WZSKA2R-P3-D2                                      SAS DiskDrive (300000 MB)

+ hdisk2          U78AB.001.WZSKA2R-P3-D3                                      SAS Disk Drive (300000 MB)

+ hdisk3          U78AB.001.WZSKA2R-P3-D4                                      SAS DiskDrive (300000 MB)

+ ses2             U78AB.001.WZSKA2R-P2-Y1                                      SASEnclosure Services Device

* tmscsi0         U78AB.001.WZSKA2R-P1-T9-LFE0000-L0                           SAS I/O ControllerInitiator Device

* sata0            U78AB.001.WZSKA2R-P1-T9                                     Controller SATA Protocol

+ cd0              U78AB.001.WZSKA2R-P3-D7                                      SATADVD-RAM Drive

+ L2cache0                                                                     L2 Cache

+ mem0                                                                         Memory

+ proc0                                                                         Processor

+ proc4                                                                        Processor

+ proc8                                                                        Processor

+ proc12                                                                       Processor

+ proc16                                                                       Processor

+ proc20                                                                       Processor

+ proc24                                                                       Processor

+ proc28                                                                       Processor

 

3.2 Host Network Configuration

The host network checks cover: editing the hosts file and configuring the network adapter IP addresses.

1. Edit the hosts file and add the following entries to define the Public IP, VIP, and Private IP addresses:

#public

172.1.1.204  rac1

172.1.1.205  rac2

 

# private

192.168.0.204     rac1-priv

192.168.0.205     rac2-priv

 

# virtual

172.1.1.206  rac1-vip

172.1.1.207  rac2-vip

 

#scan

172.1.1.208  scan-ip

2. The adapter IP addresses were already configured during OS installation; verify the configuration with: # ifconfig -a

3.3 Check Host Software Configuration

The host software checks cover: operating system version, kernel mode, and required filesets.

1. Check the operating system level: # oslevel -s. The minimum required level is 6100-02-01.

2. Check the kernel mode: # bootinfo -K. A 64-bit kernel is required.

3. Check the SSH configuration on the host: # lssrc -s sshd

4. The following filesets (or later versions) must be installed:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat                   6.1.2.1 or later

bos.perf.perfstat

bos.perf.proctools

xlC.aix61.rte                          10.1.0.0 or later

xlC.rte                                10.1.0.0 or later

gpfs.base                              3.2.1.8 or later (only when a GPFS shared file system is used)

The following commands can be used:

# lslpp -l bos.adt.*

# lslpp -l bos.perf.*

# lslpp -l xlC.*

# lslpp -l gpfs.*

to check whether the corresponding filesets are already installed. If any of them are missing or the versions are too old, install them from the operating system installation media.

AIX 6.1 requires the following filesets:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat 6.1.2.1 or later

bos.perf.perfstat

bos.perf.proctools

rsct.basic.rte

rsct.compat.clients.rte

xlC.aix61.rte 10.1.0.0 (or later)

AIX 5.3 requires the following filesets:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat 5.3.9.0 or later

bos.perf.perfstat

bos.perf.proctools

rsct.basic.rte

rsct.compat.clients.rte

xlC.aix50.rte 10.1.0.0 (or later)

Whether these filesets are installed can be confirmed with the lslpp -l command. A default installation does not include all of them, so the missing ones must be added manually. Note also that the versions on the OS media may differ slightly from those listed above; install what is available and verify.
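A small sketch that checks the whole fileset list in one pass and reports anything missing (run as root on each node; the list follows this section and may need adjusting for AIX 5.3):

#!/usr/bin/sh
# Report required filesets that lslpp cannot find
for f in bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
         bos.perf.perfstat bos.perf.proctools rsct.basic.rte \
         rsct.compat.clients.rte xlC.aix61.rte
do
    lslpp -l $f > /dev/null 2>&1 || echo "missing fileset: $f"
done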

The individual APAR (patch) requirements are as follows:

AIX 6L installations: all AIX 6L 6.1 installations require the following AIX fixes (APARs):

IZ41855
IZ51456
IZ52319

AIX 5L installations: all AIX 5L 5.3 installations require the Authorized Problem Analysis Reports (APARs) for AIX 5L v5.3 ML06 and the following AIX fixes:

IZ42940
IZ49516
IZ52331

Verify: # /usr/sbin/instfix -i -k IZ41855

Installing the patches:

Because 6100-04 does not require any of these patches, we upgraded the system to 6100-04 (the grid installer nevertheless still reported three missing packages).

1. Download 6100-04-00-0943 from the IBM website.

2. Upload the patch files to /tmp/tools.

3. Run smit update_all.

Choose not to commit, save the replaced files (so the operation can be rolled back), and accept the license agreements:

COMMIT software updates?                          No

SAVE replaced files?                              yes

ACCEPT new license agreements?                    Yes

After the upgrade, check the level:

oslevel -s

6100-04-01-0944

 

5. Check the Java version: # java -version. A 64-bit Java 1.6 is required.

 

3.4 Create Operating System Groups and Users

Create the groups, users, and directories (simplified version; for 11.2.0.4 and later, rootpre.sh expects a more fine-grained set of groups such as asmadmin; see the documentation for details).

Create the corresponding OS groups and users; create the groups first, then the users:

- As root, create the OS groups for the grid and Oracle users:

# mkgroup -'A' id='501' adms='root' oinstall
# mkgroup -'A' id='502' adms='root' asmadmin
# mkgroup -'A' id='503' adms='root' asmdba
# mkgroup -'A' id='504' adms='root' asmoper
# mkgroup -'A' id='505' adms='root' dba
# mkgroup -'A' id='506' adms='root' oper

- Create the Oracle software owners:

# mkuser id='501' pgrp='oinstall' groups='dba,asmadmin,asmdba,asmoper' home='/home/grid' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

# mkuser id='502' pgrp='oinstall' groups='dba,asmdba,oper' home='/home/oracle' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

- Check the two users just created:

# id grid
# id oracle

- Use the passwd command to set passwords for the grid (password: grid) and oracle (password: oracle) accounts:

# passwd grid
# passwd oracle

3.5 Create the Software Installation Directory Structure and Set Permissions

Change the ownership of the shared storage devices to grid:oinstall (for 11.2.0.4 and later, depending on the group setup chosen, grid:dba may be required instead).

Create the directory structure for the Oracle software, including the GRID directory and the RDBMS directory.

Note that the grid user's BASE directory and HOME directory must not be in a parent-child relationship.

- As root, create the Oracle inventory directory and set its permissions:

# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory

- As root, create the Grid Infrastructure BASE directory:

# mkdir -p /u01/app/grid
# chown grid:oinstall /u01/app/grid
# chmod -R 775 /u01/app/grid

- As root, create the Grid Infrastructure HOME directory:

# mkdir -p /u01/app/11.2.0/grid
# chown -R grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid

- As root, create the Oracle Base directory:

# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

- As root, create the Oracle RDBMS Home directory:

# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1

3.6 Edit the User Environment Profiles

If the profiles are edited as the oracle and grid users themselves, reload them afterwards with: $ . ~/.profile. If they are edited as the root user, there is no need to reload them (the settings take effect at the users' next login).

1. On node rac1, set the environment variables for the grid and oracle users.

- grid user: edit the .profile file in the home directory and add the following:

umask 022

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_SID=+ASM1

export ORACLE_HOSTNAME=rac1

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"

export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

- oracle user: edit the .profile file in the home directory and add the following:

umask 022

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export ORACLE_SID=rac1

export ORACLE_HOSTNAME=rac1

export ORACLE_UNQNAME=rac

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"

export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

2. On node rac2, set the environment variables for the grid and oracle users.

- grid user: edit the .profile file in the home directory and add the following:

umask 022

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_SID=+ASM2

export ORACLE_HOSTNAME=rac2

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"

export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

- oracle user: edit the .profile file in the home directory and add the following:

umask 022

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export ORACLE_SID=rac2

export ORACLE_HOSTNAME=rac2

export ORACLE_UNQNAME=rac

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"

export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

Note: check the environment variables for stray spaces. The installation may still complete, but afterwards the commands will not work correctly; for example, running asmcmd as the grid user will connect you to an idle (empty) instance and you will not be able to manage the ASM instance, at which point little can be done. So double-check now; if the problem is only found after installation, a reinstall is required.
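One simple way to catch stray spaces is to reload the profile and print the variables between visible delimiters; a sketch (the sample output assumes the rac1 grid settings above):

$ . ~/.profile
$ echo "[$ORACLE_BASE]" "[$ORACLE_HOME]" "[$ORACLE_SID]"
[/u01/app/grid] [/u01/app/11.2.0/grid] [+ASM1]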

3.7 System Parameter Changes

The system parameter changes cover: virtual memory management parameters, network parameters, kernel parameters, and asynchronous I/O.

From AIX 6.1 onward the values below appear to be the defaults and already match the Oracle installation guide, so normally no change is needed:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

1. Check the current virtual memory management parameters with:

   vmo -L minperm%

   vmo -L maxperm%

   vmo -L maxclient%

   vmo -L lru_file_repage

   vmo -L strict_maxclient

vmo -L strict_maxperm

If the settings are not appropriate, change them with:

    #vmo -p -o minperm%=3

    #vmo -p -o maxperm%=90

    #vmo -p -o maxclient%=90

    #vmo -p -o lru_file_repage=0

    #vmo -p -o strict_maxclient=1

    #vmo -p -o strict_maxperm=0

2. Check the network parameter settings

- Ephemeral port parameters:

The current ephemeral port settings can be viewed with no -a | fgrep ephemeral; the recommended values are:

        tcp_ephemeral_high = 65500

        tcp_ephemeral_low = 9000

        udp_ephemeral_high = 65500

        udp_ephemeral_low = 9000

If the current settings differ from the values above, change them with:

#no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500

#no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500

- Set the network tunables with the following commands:

    #no -r -o rfc1323=1

    #no -r -o ipqmaxlen=512

    #no -p -o sb_max=4194304

    #no -p -o tcp_recvspace=65536

    #no -p -o tcp_sendspace=65536

    #no -p -o udp_recvspace=1351680    (this value is 10x udp_sendspace and must be less than sb_max)

#no -p -o udp_sendspace=135168

Note: -r means the change takes effect after a reboot; -p means it takes effect immediately and persists.
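After making the changes, the current values (and, with -L, the value that will apply after reboot) can be verified in one pass; a sketch:

# no -a | egrep 'rfc1323|ipqmaxlen|sb_max|tcp_recvspace|tcp_sendspace|udp_recvspace|udp_sendspace'
# no -L sb_max        # -L also shows the reboot value of a single tunable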

3. Check the kernel parameters maxuproc (recommended 16384) and ncargs (at least 128):

#lsattr -E -l sys0 -a ncargs

#lsattr -E -l sys0 -a maxuproc

If the settings are not appropriate, change them with:

#chdev -l sys0 -a ncargs=256

#chdev -l sys0 -a maxuproc=16384

4. Check whether asynchronous I/O is enabled (on AIX 6.1 it is enabled by default):

# ioo -a | more
# ioo -o aio_maxreqs

Note: on AIX 5.3 check it with: lsattr -El aio0 -a maxreqs

 

3.8 Configure Shared Storage

All of the steps below must be performed on every node.

1. Change the owner and permissions of the physical volumes:

#chown grid:asmadmin /dev/rhdisk4

#chown grid:asmadmin /dev/rhdisk5

#chown grid:asmadmin /dev/rhdisk6

#chown grid:asmadmin /dev/rhdisk7

#chown grid:asmadmin /dev/rhdisk8

#chown grid:asmadmin /dev/rhdisk9

#chown grid:asmadmin /dev/rhdisk10

#chown grid:asmadmin /dev/rhdisk11

#chown grid:asmadmin /dev/rhdisk12

 

#chmod 660 /dev/rhdisk4

#chmod 660 /dev/rhdisk5

#chmod 660 /dev/rhdisk6

#chmod 660 /dev/rhdisk7

#chmod 660 /dev/rhdisk8

#chmod 660 /dev/rhdisk9

#chmod 660 /dev/rhdisk10

#chmod 660 /dev/rhdisk11

#chmod 660 /dev/rhdisk12

 

2. Change the physical volume attributes. The reserve_policy attribute of the shared storage disks must be no_reserve; check it with:

#lsattr -E -l hdisk4 | grep reserve_policy

#lsattr -E -l hdisk5 | grep reserve_policy

#lsattr -E -l hdisk6 | grep reserve_policy

#lsattr -E -l hdisk7 | grep reserve_policy

#lsattr -E -l hdisk8 | grep reserve_policy

#lsattr -E -l hdisk9 | grep reserve_policy

#lsattr -E -l hdisk10 | grep reserve_policy

#lsattr -E -l hdisk11 | grep reserve_policy

#lsattr -E -l hdisk12 | grep reserve_policy

 

If the reserve_policy attribute needs to be changed, use:

#chdev -l hdisk4 -a reserve_policy=no_reserve

#chdev -l hdisk5 -a reserve_policy=no_reserve

#chdev -l hdisk6 -a reserve_policy=no_reserve

#chdev -l hdisk7 -a reserve_policy=no_reserve

#chdev -l hdisk8 -a reserve_policy=no_reserve

#chdev -l hdisk9 -a reserve_policy=no_reserve

#chdev -l hdisk10 -a reserve_policy=no_reserve

#chdev -l hdisk11 -a reserve_policy=no_reserve

#chdev -l hdisk12 -a reserve_policy=no_reserve
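The per-disk commands in this section can also be wrapped in a single loop; a sketch assuming hdisk4 through hdisk12 are the shared LUNs, run as root on every node:

for i in 4 5 6 7 8 9 10 11 12
do
    chdev -l hdisk$i -a reserve_policy=no_reserve
    chown grid:asmadmin /dev/rhdisk$i
    chmod 660 /dev/rhdisk$i
done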

3. Disk layout on each host:

hdisk0          00f8e8092df611fa                   rootvg          active             

hdisk1          00f8e8082e4a46d5                  rootvg          active

hdisk2          00f8e80857a08edf                  appvg           active 

hdisk3          none                              None                               

# Local disks: hdisk0 and hdisk1 are mirrored for the operating system; hdisk2 and hdisk3 are mirrored for application installation

hdisk4          none                                None                               

hdisk5          none                                None                               

hdisk6          none                                None 

# Oracle OCR and voting disks, normal redundancy

hdisk7          none                                None                                

hdisk8          none                                None                               

hdisk9          none                                None  

# Oracle data disks, normal redundancy

hdisk10         none                                None                                

hdisk11         none                                None                               

hdisk12         none                                None

# Oracle flash recovery and archive disks, normal redundancy

3.8.1 Clear the PVIDs

Check the LUNs; if a PVID has already been assigned to a LUN, it must be cleared:

chdev -l hdisk2 -a pv=clear

Repeat the same operation to clear the PVIDs of all of the LUNs.
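A sketch of clearing the PVIDs in a loop, assuming the shared LUNs hdisk4 through hdisk12 planned in section 2.4; lspv afterwards should show no PVID for these disks:

for i in 4 5 6 7 8 9 10 11 12
do
    chdev -l hdisk$i -a pv=clear
done
lspv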

3.9 Configure the NTP Service (Optional)

Oracle 11gR2 provides the Cluster Time Synchronization Service (CTSS); when no NTP service is configured, CTSS keeps the time on all RAC nodes consistent. ASM can now serve as unified storage: the Oracle Cluster Registry (OCR) and voting disks can be placed on ASM disks, so a separate cluster file system is no longer required, and 11g Release 2 no longer supports raw devices for the clusterware files (previously the clusterware could be installed on raw devices). Another feature is SCAN (Single Client Access Name), a single client access name with built-in failover: clients only need the SCAN name to reach the cluster instead of listing every node's VIP in their configuration files, which greatly simplifies client access to the RAC system. SCAN normally requires DNS support, although it can also be resolved through the hosts file.

If NTP is configured on the system, CTSS runs in observer mode. For the detailed NTP setup, refer to the AIX service configuration documentation.

 

3.10 Configure SSH

In 11.2, configuring SSH requires the following setup:

By default, OUI searches for SSH public keys in the directory /usr/local/etc/, and ssh-keygen binaries in /usr/local/bin. However, on AIX, SSH public keys typically are located in the path /etc/ssh, and ssh-keygen binaries are located in the path /usr/bin. To ensure that OUI can set up SSH, use the following commands to create soft links:

# ln -s /etc/ssh /usr/local/etc
# ln -s /usr/bin /usr/local/bin

Configure the root user's environment variables:
====================================================================
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

export AIXTHREAD_SCOPE=S

set -o vi
alias ll="ls -lrt"

3.10.1 SSH Trust Configuration (Optional)

SSH user equivalence can also be configured automatically during the grid installation.

Note: when Oracle 11gR2 grid auto-configures SSH on AIX it reports errors, because the command paths Oracle calls do not match the actual paths on AIX. Either modify the installer's sshsetup.sh script, or create soft links at the paths Oracle expects; the exact paths are shown by Oracle during installation.

3.10.1.1 First install OpenSSH on both machines

The detailed installation steps are not covered here. Download openssh and openssl; install openssl first, then openssh.

Alternatively, install from the AIX system media: run smitty install and select all of the ssh packages.

After installation, verify with:

# lslpp -l | grep ssh

3.10.1.2 Then set up the SSH trust relationship between the two machines (this can also be selected automatically during the grid installation)

3.10.1.2.1 Method 1

- Edit /etc/ssh/sshd_config

In it, find the lines:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile      .ssh/authorized_keys

and remove the leading comment markers.

- Generate the keys with the ssh-keygen command

Accept all the defaults; the generated private key and public key are saved under ~/.ssh.

For convenient access later, the passphrase is normally left empty.

- Exchange the public keys between the two machines

There are several ways to do this: ftp, rcp, or scp all work. Here we use FTP to copy the id_rsa and id_rsa.pub files under ~/.ssh on each node to the other node. Because the file names are identical, they are renamed, for example to id_rsa239/id_rsa239.pub and id_rsa237/id_rsa237.pub, appending an IP-based suffix to tell them apart.

Create the authorized_keys file

Because sshd_config was modified above and contains the line

AuthorizedKeysFile      .ssh/authorized_keys

which specifies where the authentication keys are read from, we use the default location and create an empty authorized_keys file under ~/.ssh:

touch authorized_keys

Append the contents of the other host's public key to the authorized_keys file:

Node1(192.168.0.204):

bash-3.00# cat id_rsa204.pub > authorized_keys

node2(192.168.0.205):

# cat id_rsa205.pub > authorized_keys

Test:

ssh 192.168.0.204

ssh 192.168.0.205

The first login prompts for confirmation; after answering yes it will not prompt again.

3.10.1.2.2 Method 2

Run the following on both nodes:

# su - grid

$mkdir ~/.ssh

$chmod 700  ~/.ssh

$/usr/bin/ssh-keygen -t rsa

rac1:/home/grid$/usr/bin/ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):    

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

When prompted for a passphrase, leave it empty and just press Enter.

Run the following only on node 1:

$ touch ~/.ssh/authorized_keys

$ ssh rac1 cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys

$ ssh rac2 cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys

$ scp ~/.ssh/authorized_keys rac2:.ssh/authorized_keys


Run the following only on node 2:

$ chmod 600 ~/.ssh/authorized_keys

After the configuration is complete, test it in the same way as described for Method 1 (see the sketch below).
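A quick sketch of an equivalence test, run as grid (and later as oracle) on both nodes; every command should print the date without prompting for a password:

for h in rac1 rac2 rac1-priv rac2-priv
do
    ssh $h date
done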

3.11 DNS Configuration (avoids a verification error at the end of the grid installation; this step can be skipped)

# mv /usr/bin/nslookup /usr/bin/nslookup.org

# cat /usr/bin/nslookup

#!/usr/bin/sh

HOSTNAME=${1}

if [[ $HOSTNAME = "rx-cluster-scan" ]]; then

    echo "Server:         24.154.1.34"

    echo "Address:        24.154.1.34#53"

    echo "Non-authoritative answer:"

    echo "Name:   rx-cluster-scan"

    echo "Address: 1.1.1.11"  #假設1.1.1.1SCAN地址

else

    /usr/bin/nslookup.org $HOSTNAME

fi

 

Note: if you need to modify your SQLNET.ORA, ensure that EZCONNECT is in the list if you specify the order of the naming methods used for client name resolution lookups (the 11g Release 2 default is NAMES.DIRECTORY_PATH=(tnsnames, ldap, ezconnect)).

3.12 Things to Be Aware of Beforehand

A. Installing 11gR2 RAC requires SSH user equivalence; the old rsh-based configuration no longer passes the installation checks. The OUI provides a button that configures SSH user equivalence automatically, so it no longer has to be set up manually in advance.

Note, however, that this feature was developed entirely for Linux, so on AIX the following must be done beforehand:

ln -s /usr/bin/ksh /bin/bash

mkdir -p /usr/local/bin

ln -s /usr/bin/ssh-keygen /usr/local/bin/ssh-keygen

When configuring user equivalence the OUI invokes /bin/bash, and AIX does not ship bash by default, so ksh has to be soft-linked to bash (alternatively, install the bash package).

Likewise, the OUI uses /usr/local/bin/ssh-keygen to generate the equivalence keys, but on AIX with OpenSSH installed the ssh-keygen command lives in /usr/bin by default, so that link is needed as well.

 

B. After Grid Infrastructure has been installed successfully, running the cluvfy command may report an error.

 

 

# cluvfy comp nodeapp -verbose

 

ERROR:

CRS is not installed on any of the nodes

Verification cannot proceed

 

Moreover, once this error appears, the RAC database software cannot be installed either; the installer fails with:

[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.

In other words, both cluvfy and the OUI believe that CRS is not installed on this machine and that it is not part of a cluster, even though crsctl check crs runs perfectly normally.

The fix is described in Metalink Note [ID 798203.1]. In short, during the Grid Infrastructure installation the CRS="true" attribute is dropped from inventory.xml, which is clearly an installer bug; the Grid home has to be detached and re-attached (detachHome/attachHome) manually, as sketched below.
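A sketch of one way to re-register the Grid home with the CRS flag set, as described in the MOS note; run it as the grid user, and treat the home path and node list (taken from this document) as values to verify against your own inventory.xml before use:

$ $ORACLE_HOME/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList \
      ORACLE_HOME=/u01/app/11.2/grid "CLUSTER_NODES={rac1,rac2}" CRS=true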

 

4 Install Oracle Grid Infrastructure 11g R2

4.1 Prepare the Grid Infrastructure Installation Software

1. Upload the downloaded p13390677_112040_AIX64-5L_3of7.zip archive to the grid user's home directory.

2. Unzip p13390677_112040_AIX64-5L_3of7.zip into the current directory:

# cd /home/grid
# unzip p13390677_112040_AIX64-5L_3of7.zip

If the unzip package is not installed, jar can be used to extract the archive instead:

# jar -xvf p13390677_112040_AIX64-5L_3of7.zip

3. Change the ownership of the extracted grid directory:

# chown -R grid:oinstall /home/grid/grid

4.2 Use the CVU Script to Verify That the System Meets the Installation Requirements

Installing an Oracle RAC environment involves several stages: the hardware, the OS, the clusterware, and the database software are installed in order, and every stage delivers components the installation cannot succeed without. Oracle provides the CVU (Cluster Verification Utility) to verify, during an Oracle RAC installation, that the system meets the requirements.

1. Log in as the grid user and confirm that the current directory is the grid user's home directory, i.e. pwd prints /home/grid:

# pwd                  // prints "/home/grid/"
# cd grid              // change into the installer's root directory

2. Run the CVU script to verify the system and write the result to the text file report.txt:

# ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose > report.txt

3. Review report.txt, for example with:

# cat report.txt | grep failed

4. Copy rootpre.sh from the grid directory of the installation media to the grid user's home directory on every node, then run rootpre.sh as root on all nodes:

# scp -r /home/grid/grid/rootpre/ root@192.168.0.205:/home/grid/rootpre/
# ./rootpre.sh

4.3 Start the Grid Infrastructure Installation

1. First install Xmanager on your workstation and open an "Xmanager - Passive" session there.

2. From the workstation, connect to the host over SSH as the grid user and run xclock to verify that X11 graphics can be displayed locally. If a clock window appears (as in the figure below), continue with the next steps; otherwise troubleshoot the display first.

3. In the SSH session, change into the grid installation directory and run the installer script to start the grid installation:

./runInstaller

# su - grid

rac1:/home/grid$ export DISPLAY=172.1.165.172:0.0

rac1:/home/grid$ /u01/soft/grid/runInstaller

********************************************************************************

Your platform requires the root user to perform certain pre-installation
OS preparation.  The root user should run the shell script 'rootpre.sh' before
you proceed with Oracle installation. rootpre.sh can be found at the top level
of the CD or the stage area.

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

********************************************************************************

Has 'rootpre.sh' been run by root on all nodes? [y/n] (n)
y

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 190 MB.  Actual 9516 MB    Passed
Checking swap space: must be greater than 150 MB.  Actual 9216 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-01-03_11-05-23PM. Please wait ...rac1:/home/grid$

4. When the OUI main screen appears, select "Skip software updates" and click "Next".

5. Select "Install and Configure Oracle Grid Infrastructure for a Cluster" and click "Next".

6. Select "Advanced Installation" and click "Next".

7. Add "Simplified Chinese" to "Selected Languages" and click "Next".

8. Enter the configuration information as shown in the figure below and click "Next".

9. Click "Add" to add the second grid node, rac2, with the settings shown in the figure, click "OK", then click "Next". If SSH equivalence was not configured earlier, it can be configured in this step.

10. The OUI automatically distinguishes the Public and Private networks; click "Next".

11. Select "Oracle ASM" as the storage type and click "Next".

12. Add /dev/rhdisk2, /dev/rhdisk3, and /dev/rhdisk4 to the "OCR_VOTE" disk group, select "Normal" redundancy and an AU size of 1M, then click "Next".

13. Select "Use same passwords for these accounts", enter the password "Abc560647", and click "Next".

14. Specify the ASM management groups and click "Next".

15. Specify the Oracle base directory and the software installation location, then click "Next".

16. Specify the Oracle inventory directory location and click "Next".

17. The prerequisite checks are executed.

18. Once the prerequisite checks pass, a configuration summary is shown; click "Install".

19. The OUI starts installing the Grid Infrastructure software.

20. During the installation a dialog appears asking you to run the scripts it lists as root on every node; run them, then click "OK". Note: run the scripts on node rac1 first; only then run them on node rac2.

The output on the first node is as follows:

#/u01/app/oraInventory/orainstRoot.sh

Changing permissionsof /u01/app/oraInventory.

Adding read,writepermissions for group.

Removingread,write,execute permissions for world.

 

Changing groupnameof /u01/app/oraInventory to dba.

The execution of thescript is complete.

#/u01/app/11.2/grid/root.sh

Performing root useroperation for Oracle 11g

 

The followingenvironment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2/grid

 

Enter the fullpathname of the local bin directory: [/usr/local/bin]:

The contents of"dbhome" have not changed. No need to overwrite.

The contents of"oraenv" have not changed. No need to overwrite.

The contents of"coraenv" have not changed. No need to overwrite.

 

 

Creating /etc/oratabfile...

Entries will beadded to the /etc/oratab file as needed by

DatabaseConfiguration Assistant when a database is created

Finished runninggeneric part of root script.

Now product-specificroot actions will be performed.

Using configurationparameter file: /u01/app/11.2/grid/crs/install/crsconfig_params

Creating trace directory

User ignoredPrerequisites during installation

Installing TraceFile Analyzer

User grid has therequired capabilities to run CSSD in realtime mode

OLR initialization -successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterwareentries to inittab

CRS-2672: Attemptingto start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attemptingto start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attemptingto start 'ora.cssdmonitor' on 'rac1'

CRS-2672: Attemptingto start 'ora.gipcd' on 'rac1'

CRS-2676: Start of'ora.cssdmonitor' on 'rac1' succeeded

CRS-2676: Start of'ora.gipcd' on 'rac1' succeeded

CRS-2672: Attemptingto start 'ora.cssd' on 'rac1'

CRS-2672: Attemptingto start 'ora.diskmon' on 'rac1'

CRS-2676: Start of'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of'ora.cssd' on 'rac1' succeeded

 

ASM created andstarted successfully.

 

Disk Group CRSDGcreated successfully.

 

clscfg: -installmode specified

Successfullyaccumulated necessary OCR keys.

Creating OCR keysfor user 'root', privgrp 'system'..

Operationsuccessful.

CRS-4256: Updatingthe profile

Successful additionof voting disk a58239b181b14f03bff383940a72cbe9.

Successful additionof voting disk 12931f422fe74fd6bf2721d63a02f639.

Successful additionof voting disk 6f7ee1cbbe6a4ff1bf3a1b097a00deb7.

Successfullyreplaced voting disk group with +CRSDG.

CRS-4256: Updating theprofile

CRS-4266: Votingfile(s) successfully replaced

##  STATE   File Universal Id               File Name Disk group

--  -----   -----------------               --------- ---------

 1. ONLINE  a58239b181b14f03bff383940a72cbe9 (/dev/rhdisk4) [CRSDG]

 2. ONLINE  12931f422fe74fd6bf2721d63a02f639 (/dev/rhdisk5) [CRSDG]

 3. ONLINE  6f7ee1cbbe6a4ff1bf3a1b097a00deb7 (/dev/rhdisk6) [CRSDG]

Located 3 votingdisk(s).

CRS-2672: Attemptingto start 'ora.asm' on 'rac1'

CRS-2676: Start of'ora.asm' on 'rac1' succeeded

CRS-2672: Attemptingto start 'ora.CRSDG.dg' on 'rac1'

CRS-2676: Start of'ora.CRSDG.dg' on 'rac1' succeeded

Configure OracleGrid Infrastructure for a Cluster ... succeeded

The output on the second node is as follows:

#/u01/app/oraInventory/orainstRoot.sh

Changing permissionsof /u01/app/oraInventory.

Adding read,writepermissions for group.

Removingread,write,execute permissions for world.

 

Changing groupnameof /u01/app/oraInventory to dba.

The execution of thescript is complete.

#/u01/app/11.2/grid/root.sh

Performing root useroperation for Oracle 11g

 

The followingenvironment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2/grid

 

Enter the fullpathname of the local bin directory: [/usr/local/bin]:

The contents of"dbhome" have not changed. No need to overwrite.

The contents of"oraenv" have not changed. No need to overwrite.

The contents of"coraenv" have not changed. No need to overwrite.

 

 

Creating /etc/oratabfile...

Entries will beadded to the /etc/oratab file as needed by

DatabaseConfiguration Assistant when a database is created

Finished runninggeneric part of root script.

Now product-specificroot actions will be performed.

Using configurationparameter file: /u01/app/11.2/grid/crs/install/crsconfig_params

Creating tracedirectory

User ignoredPrerequisites during installation

Installing TraceFile Analyzer

User grid has therequired capabilities to run CSSD in realtime mode

OLR initialization -successful

Adding Clusterwareentries to inittab

CRS-4402: The CSSdaemon was started in exclusive mode but found an active CSS daemon on node rac1,number 1, and is terminating

An active clusterwas found during exclusive startup, restarting to join the cluster

Configure OracleGrid Infrastructure for a Cluster ... succeeded

21. The installation continues; when all tasks are complete, click "Close".

22. After the installation finishes, verify the Grid Infrastructure installation as the grid user:

cluvfy stage -post crsinst -n rac1,rac2

Review the output to confirm that the Grid Infrastructure was installed successfully.

23. As the grid user, check the current state of the Grid Infrastructure (a few additional checks are sketched below):

[grid@rac1 ~]$ crsctl check crs               // check the overall CRS status

[grid@rac1 ~]$ crsctl check cluster -all      // check the CRS status on each node

[grid@rac1 ~]$ crsctl stat res -t             // check the CRS resource status (or crs_stat -t -v, the 10g-style command)

[grid@rac1 ~]$ olsnodes -n                    // list the cluster nodes
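A few additional status checks that may be useful at this point; a sketch using this document's configuration:

[grid@rac1 ~]$ srvctl status nodeapps          // VIP and ONS status on each node
[grid@rac1 ~]$ srvctl config scan              // SCAN name and address
[grid@rac1 ~]$ srvctl status asm               // ASM instance status on each node
[grid@rac1 ~]$ crsctl query css votedisk       // voting disk locations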

5 Install Oracle Database 11g R2

5.1 Prepare the Database Installation Software

1. Upload p13390677_112040_AIX64-5L_1of7.zip and p13390677_112040_AIX64-5L_2of7.zip to the oracle user's home directory.

2. Unzip both archives into the current directory (run as root):

# cd /home/oracle
# unzip p13390677_112040_AIX64-5L_1of7.zip
# unzip p13390677_112040_AIX64-5L_2of7.zip

3. Change the ownership of the extracted database directory:

# chown -R oracle:oinstall /home/oracle/database

5.2 Use the cluvfy Script to Verify That the System Meets the Installation Requirements

Before the Oracle database software is installed, its own cluvfy tool is not yet available, so the oracle user has to call the cluvfy tool from the grid home to perform the check. The steps are as follows:

1. Log in as the oracle user and change to /u01/app/11.2.0/grid/bin:

# cd /u01/app/11.2.0/grid/bin

2. Run the cluvfy script to verify the system and write the result to report.txt in the oracle user's home directory:

# ./cluvfy stage -pre dbinst -n rac1,rac2 > /home/oracle/report.txt

3. Change back to the oracle user's home directory and review report.txt:

# cd                              // return to the oracle user's home directory

# cat report.txt | grep failed    // review the verification output

5.3 Start the Database Installation

1. Log in to node rac1 over SSH as the oracle user, open a terminal, and change to /home/oracle/database.

2. In the terminal, start the Oracle database installer:

./runInstaller

3. The OUI starts; deselect the security-update notification configuration and click "Next".

4. An error dialog complains that no account or email address was given; click "Yes" to ignore it.

5. Select "Skip software updates" and click "Next".

6. Select "Install database software only" and click "Next".

7. Choose the configuration shown below and click "Next".

8. Add "Chinese" language support and click "Next".

9. Select the "Enterprise Edition" installation and click "Next".

10. Select the Oracle base directory and the software installation location, then click "Next".

11. Select the operating system groups and click "Next".

12. The prerequisite checks are executed.

13. The prerequisite checks may fail with the error shown below:

ERROR: An internal error occurred within cluster verification framework
Unable to obtain network interface list from Oracle Clusterware. PRCT-1011 : Failed to run "oifcfg". Detailed error: null

It can be resolved in one of the following ways:

1) The network settings recorded in the OCR may be incorrect; in that case proceed as follows.

su - root
/u01/app/11.2.0/grid/bin/ocrdump /tmp/dump.ocr1

grep 'css.interfaces' /tmp/dump.ocr1 | awk -F \] '{print $1}' | awk -F \. '{print $5}' | sort -u

/u01/app/11.2/grid/bin/oifcfg delif -global en6 -force
/u01/app/11.2/grid/bin/oifcfg delif -global en7 -force

su - grid
/u01/app/11.2/grid/bin/oifcfg iflist -p -n

$ /u01/app/11.2/grid/bin/oifcfg iflist -p -n
en8 10.1.0.0  PUBLIC  255.255.255.128
en9 192.168.0.0  PUBLIC  255.255.255.0

#su - grid

/u01/app/11.2/grid/bin/oifcfg setif -global en9/192.168.0.0:cluster_interconnect
/u01/app/11.2/grid/bin/oifcfg setif -global en8/10.1.0.0:public

#su - grid
/u01/app/11.2/grid/bin/oifcfg getif

$ /u01/app/11.2/grid/bin/oifcfg getif
en8 192.168.0.0 global cluster_interconnect
en9 10.1.0.0  global  public

2) Or the problem may be caused by an environment variable:

# su - oracle
$ unset ORA_NLS10

Alternatively, set ORA_NLS10 to the correct location: export ORA_NLS10=$GRID_HOME/nls/data

Then rerun ./runInstaller.

Regarding this prerequisite error, also see the MOS article:

11gR2 OUI On AIX Pre-Requisite Check Gives Error "Patch IZ97457, IZ89165 Are Missing" [ID 1439940.1]

In short, the APAR numbers differ between TL levels. The note lists the APARs equivalent to IZ97457 and IZ89165 for each TL, so as long as the corresponding APAR is installed, the warning can be ignored.

Below are the equivalent APARs for each specific TL:

** Patch IZ89165 **

6100-03 - use AIX APAR IZ89304

6100-04 - use AIX APAR IZ89302

6100-05 - use AIX APAR IZ89300

6100-06 - use AIX APAR IZ89514

7100-00 - use AIX APAR IZ89165

** Patch IZ97457 **

5300-11 - use AIX APAR IZ98424

5300-12 - use AIX APAR IZ98126

6100-04 - use AIX APAR IZ97605

6100-05 - use AIX APAR IZ97457

6100-06 - use AIX APAR IZ96155

7100-00 - use AIX APAR IZ97035

 

Check the current operating system level:

# oslevel -s

6100-06-08-1216

#

Check that the patches are applied:

# instfix -i -k IZ89514

   All filesets for IZ89514 were found.

#

# instfix -i -k IZ96155

   All filesets for IZ96155 were found.

#

 

14. Once the prerequisite checks pass, the installation summary is shown; click "Install".

15. The OUI starts installing the database software, as shown in the figure.

16. During the installation a dialog appears asking you to run a script as root on every node; run it, then click "OK". Note: run it on node rac1 first; only then run it on node rac2.

17. Click "Close" to finish the Oracle Database software installation.

6 Create the Oracle RAC Cluster Database

In this installation the database files are stored in ASM. A disk group (OCR_VOTE) was already created during the Grid Infrastructure installation, so a new DATA disk group is created here to hold the database files, keeping the Oracle cluster files (OCR and voting disks) and the database files in separate disk groups. This production environment also uses a fast recovery area and archiving, so one more disk group, FRA_ARCHIVE, is created to hold the fast recovery area files and the archived logs.

6.1 Create the ASM Disk Groups

1. Log in to node rac1 over SSH as the grid user, open a terminal, and run asmca.

2. The ASM configuration assistant appears; click the "Create" button.

3. In the create-disk-group dialog, enter the information shown in the figure and click "OK".

4. After a moment a success dialog appears; click "OK".

5. After the DATA disk group has been created the screen looks as shown below. Next, create the FRA_ARCHIVE disk group for the fast recovery area; click "Create".

6. In the create-disk-group dialog, enter the information shown in the figure and click "OK".

7. After a moment a success dialog appears; click "OK".

8. After the FRA_ARCHIVE disk group has been created the screen looks as shown below; click "Exit" and answer "Yes" in the confirmation dialog. (An SQL alternative to the GUI is sketched below.)
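As an alternative to the asmca GUI, the same disk groups can be created from SQL*Plus on the ASM instance; a sketch using the devices planned in section 2.4 (redundancy matches the plan; run as the grid user on rac1):

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK '/dev/rhdisk7','/dev/rhdisk8','/dev/rhdisk9';
SQL> CREATE DISKGROUP FRA_ARCHIVE NORMAL REDUNDANCY DISK '/dev/rhdisk10','/dev/rhdisk11','/dev/rhdisk12';
-- On the other node, mount the new groups if they are not mounted automatically:
SQL> ALTER DISKGROUP DATA MOUNT;
SQL> ALTER DISKGROUP FRA_ARCHIVE MOUNT;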

6.2 Create the RAC Database with DBCA

1. Log in to node rac1 as the oracle user through Xmanager/Xbrowser, open a terminal, and run dbca.

2. The database creation wizard appears; select Oracle RAC database and click "Next".

3. Select "Create a Database" and click "Next".

4. Select "Custom Database" and click "Next".

5. Enter the SID and global name of the cluster database, select all nodes, and click "Next".

6. In this step choose to configure Database Control and enable the automatic maintenance tasks, then click "Next".

7. Use the same password for all accounts: changanjie; click "Next".

8. Select ASM as the storage type and the DATA disk group for the database area; click "Next".

If no ASM disk groups are found in this step, check that the oracle user has the same group memberships on both nodes.

9. A dialog asks for the ASMSNMP account password; it is the password specified in step 12 of section 4.3 (dragonsoft). Click "OK".

10. Specify the FRA_ARCHIVE disk group as the fast recovery area, enable archiving, and click "Next".

11. Keep the default database components and no custom scripts; click "Next".

12. In this step set the character set and set the processes parameter to 500; leave the parameters on the other tabs at their defaults and click "Next".

13. The data file details are shown; here the number of online redo log groups, the tablespace sizes, and similar settings can be adjusted. Click "Next".

14. Choose to generate the database creation scripts and click "Finish".

15. A DBCA summary dialog appears; click "OK".

16. DBCA first generates the database creation scripts and saves them to the specified directory; once the scripts have been generated a dialog appears, click "OK", and database creation starts.

17. The database creation progress is shown in the figure.

18. When database creation completes, the password management dialog appears; click "Exit".

19. The Oracle RAC cluster database is now complete.

7 Basic Administration of the Oracle RAC Cluster Database

As the grid user, check the cluster node list:

    # olsnodes -n

As the grid user, check the cluster resource status:

    # crsctl stat res -t   (or crs_stat -t -v)

As the oracle user, shut down the database:

    # sqlplus / as sysdba
    SQL> shutdown immediate
    SQL> exit

or, cluster-wide: srvctl stop database -d <db_name> (see the sketch below)

As root, stop the clusterware:

    # /u01/app/11.2.0/grid/bin/crsctl stop crs

or, on all nodes at once: crsctl stop cluster -all

As root, shut down the operating system:

    shutdown -F

As root, start the clusterware:

    # /u01/app/11.2.0/grid/bin/crsctl start crs

As the oracle user, start the database:

    # sqlplus / as sysdba
    SQL> startup
    SQL> exit
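The cluster-aware way to stop and start the whole database (all instances at once) is srvctl, run as the oracle user; a sketch using the database name rac from this document:

$ srvctl status database -d rac
$ srvctl stop database -d rac -o immediate
$ srvctl start database -d rac
$ srvctl status instance -d rac -i rac1,rac2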

8 Appendix

8.1 Uninstall/Remove 11.2.0.2 Grid Infrastructure & Database in Linux

You may have installed 11gR2 Grid Infrastructure and a RAC database on a platform for research or testing. Because of the way GI is deployed, the GI and RAC database software cannot be removed simply by deleting CRS_HOME and running a series of scripts. Fortunately, 11gR2 provides a new uninstall feature, deinstall: running the deinstall script conveniently removes the Oracle software products and their various configuration files from the system.

The uninstall steps are as follows:

1. Migrate the existing databases off the platform, or back them up physically/logically; if a database is no longer of any value, use DBCA to delete it and its related services.

Log in as the oracle user, start the DBCA GUI, and select RAC database:

[oracle@rac2 ~]$ dbca

In step 1 of 2 (Operations), choose "Delete a Database".

In step 2 of 2 (List of cluster databases), select the database to be deleted.

Delete every database in the cluster environment one by one.

2. As the oracle user, log in to any node and run the deinstall script under $ORACLE_HOME/deinstall:

 

SQL> select * from v$version;

 

BANNER

--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production

PL/SQL Release 11.2.0.2.0 - Production

CORE    11.2.0.2.0      Production

TNS for Linux: Version 11.2.0.2.0 - Production

NLSRTL Version 11.2.0.2.0 - Production

 

SQL> select * from global_name;

 

GLOBAL_NAME

--------------------------------------------------------------------------------

www.oracledatabase12g.com

 

 

[root@rac2 ~]# su - oracle

 

[oracle@rac2 ~]$ cd $ORACLE_HOME/deinstall

 

[oracle@rac2 deinstall]$ ./deinstall

 

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /g01/oraInventory/logs/

 

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

######################### CHECK OPERATION START #########################

Install check configuration START

 

Checking for existence of the Oracle home location /s01/orabase/product/11.2.0/dbhome_1

Oracle Home type selected for de-install is: RACDB

Oracle Base selected for de-install is: /s01/orabase

Checking for existence of central inventory location /g01/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /g01/11.2.0/grid

The following nodes are part of this cluster: rac1,rac2

 

Install check configuration END

 

Skipping Windows and .NET products configuration check

 

Checking Windows and .NET products configuration END

 

Network Configuration check config START

 

Network de-configuration trace file location:

/g01/oraInventory/logs/netdc_check2011-08-31_11-19-25-PM.log

 

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [CRS_LISTENER]:

 

Network Configuration check config END

 

Database Check Configuration START

 

Database de-configuration trace file location: /g01/oraInventory/logs/databasedc_check2011-08-31_11-19-39-PM.log

 

Use comma as separator when specifying list of values as input

 

Specify the list of database names that are configured in this Oracle home []:

Database Check Configuration END

 

Enterprise Manager Configuration Assistant START

 

EMCA de-configuration trace file location: /g01/oraInventory/logs/emcadc_check2011-08-31_11-19-46-PM.log

 

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /g01/oraInventory/logs//ocm_check131.log

Oracle Configuration Manager check END

 

######################### CHECK OPERATION END #########################

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /g01/11.2.0/grid

The cluster node(s) on which the Oracle home de-installation will be performed are:rac1,rac2

Oracle Home selected for de-install is: /s01/orabase/product/11.2.0/dbhome_1

Inventory Location where the Oracle home registered is: /g01/oraInventory

Skipping Windows and .NET products configuration check

Following RAC listener(s) will be de-configured: CRS_LISTENER

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

rac1 : Oracle Home exists with CCR directory, but CCR is not configured

rac2 : Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2011-08-31_11-19-23-PM.out'

Any error messages from this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2011-08-31_11-19-23-PM.err'

 

######################## CLEAN OPERATION START ########################

 

Enterprise Manager Configuration Assistant START

 

EMCA de-configuration trace file location: /g01/oraInventory/logs/emcadc_clean2011-08-31_11-19-46-PM.log

 

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /g01/oraInventory/logs/databasedc_clean2011-08-31_11-20-00-PM.log

 

Network Configuration clean config START

 

Network de-configuration trace file location: /g01/oraInventory/logs/netdc_clean2011-08-31_11-20-00-PM.log

 

De-configuring RAC listener(s): CRS_LISTENER

 

De-configuring listener: CRS_LISTENER

    Stopping listener: CRS_LISTENER

    Listener stopped successfully.

    Unregistering listener: CRS_LISTENER

    Listener unregistered successfully.

Listener de-configured successfully.

 

De-configuring Listener configuration file on all nodes...

Listener configuration file de-configured successfully.

 

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

 

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

 

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

 

De-configuring backup files on all nodes...

Backup files de-configured successfully.

 

The network configuration has been cleaned up successfully.

 

Network Configuration clean config END

 

Oracle Configuration Manager clean START

OCM clean log file location : /g01/oraInventory/logs//ocm_clean131.log

Oracle Configuration Manager clean END

Removing Windows and .NET products configuration END

Oracle Universal Installer clean START

 

Detach Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

 

Delete directory '/s01/orabase/product/11.2.0/dbhome_1' on the local node : Done

 

Delete directory '/s01/orabase' on the local node : Done

 

Detach Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the remote nodes 'rac1' : Done

 

Delete directory '/s01/orabase/product/11.2.0/dbhome_1' on the remote nodes 'rac1' : Done

 

Delete directory '/s01/orabase' on the remote nodes 'rac1' : Done

 

Oracle Universal Installer cleanup was successful.

 

Oracle Universal Installer clean END

 

Oracle install clean START

 

Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-19-18PM' on node 'rac2'

Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-19-18PM' on node 'rac1'

 

Oracle install clean END

 

######################### CLEAN OPERATION END #########################

 

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: CRS_LISTENER

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Skipping Windows and .NET products configuration clean

Successfully detached Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the local node.

Successfully deleted directory '/s01/orabase/product/11.2.0/dbhome_1' on the local node.

Successfully deleted directory '/s01/orabase' on the local node.

Successfully detached Oracle home '/s01/orabase/product/11.2.0/dbhome_1' from the central inventory on the remote nodes 'rac1'.

Successfully deleted directory '/s01/orabase/product/11.2.0/dbhome_1' on the remote nodes 'rac1'.

Successfully deleted directory '/s01/orabase' on the remote nodes 'rac1'.

Oracle Universal Installer cleanup was successful.

 

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

 

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

The deinstall script above removes the RDBMS software under $ORACLE_HOME on every node and unregisters it from the central inventory. Note that this operation is irreversible!

3.

As root, run the command "$ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force" on every node except the last one. For example, if you have 2 nodes, run the command on only one of them:

[root@rac1 ~]# $ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

 

Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params

Network exists: 1/172.1.1.0/255.255.255.0/eth0, type static

VIP exists: /rac1-vip/172.1.1.206/172.1.1.0/255.255.255.0/eth0, hosting node rac1

VIP exists: /rac2-vip/172.1.1.207/172.1.1.0/255.255.255.0/eth0, hosting node rac2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

ACFS-9200: Supported

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'

CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'rac1'

CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.oc4j' on 'rac2'

CRS-2676: Start of 'ora.oc4j' on 'rac2' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded

CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'rac1' succeeded

CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac1'

CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'

CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded

CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node

4. On the last node, run "$ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" as root; this command also clears the OCR and voting disks:

[root@rac2 ~]# $ORA_CRS_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

 

Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params

CRS resources for listeners are still configured

Network exists: 1/172.1.1.0/255.255.255.0/eth0, type static

VIP exists: /rac1-vip/172.1.1.206/172.1.1.0/255.255.255.0/eth0, hosting node rac1

VIP exists: /rac2-vip/172.1.1.207/172.1.1.0/255.255.255.0/eth0, hosting node rac2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

ACFS-9200: Supported

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'

CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'

CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded

CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'rac2' succeeded

CRS-2677: Stop of 'ora.FRA.dg' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'

CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'

CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'

CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac2'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'

CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded

CRS-4611: Successful deletion of voting disk +SYSTEMDG.

ASM de-configuration trace file location: /tmp/asmcadc_clean2011-08-31_11-55-52-PM.log

ASM Clean Configuration START

ASM Clean Configuration END

 

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2011-08-31_11-55-52-PM.log for details.

 

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'

CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'

CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'

CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'

CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node

5. On any node, run the "$ORA_CRS_HOME/deinstall/deinstall" script as the Grid Infrastructure owner:

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ cd $ORA_CRS_HOME

[grid@rac1 grid]$ cd deinstall/

 

[grid@rac1 deinstall]$ cat deinstall

#!/bin/sh

#

# $Header: install/utl/scripts/db/deinstall /main/3 2010/05/28 20:12:57 ssampath Exp $

#

# Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.

#

#    NAME

#      deinstall - wrapper script that calls deinstall tool.

#

#    DESCRIPTION

#      This script will set all the necessary variables and call the tools

#      entry point.

#

#    NOTES

#

#

#    MODIFIED   (MM/DD/YY)

#    mwidjaja    04/29/10 - XbranchMerge mwidjaja_bug-9579184 from

#                           st_install_11.2.0.1.0

#    mwidjaja    04/15/10 - Added SHLIB_PATH for HP-PARISC

#    mwidjaja    01/14/10 - XbranchMerge mwidjaja_bug-9269768 from

#                           st_install_11.2.0.1.0

#    mwidjaja    01/14/10 - Fix help message for params

#    ssampath    12/24/09 - Fix for bug 9227535. Remove legacy version_check

#                           function

#    ssampath    12/01/09 - XbranchMerge ssampath_bug-9167533 from

#                           st_install_11.2.0.1.0

#    ssampath    11/30/09 - Set umask to 022.

#    prsubram    10/12/09 - XbranchMerge prsubram_bug-9005648 from main

#    prsubram    10/08/09 - Compute ARCHITECTURE_FLAG in the script

#    prsubram    09/15/09 - Setting LIBPATH for AIX

#    prsubram    09/10/09 - Add AIX specific code check java version

#    prsubram    09/10/09 - Change TOOL_DIR to BOOTSTRAP_DIR in java cmd

#                           invocation of bug#8874160

#    prsubram    09/08/09 - Change the default shell to /usr/xpg4/bin/sh on

#                           SunOS

#    prsubram    09/03/09 - Removing -d64 for client32 homes for the bug8859294

#    prsubram    06/22/09 - Resolve port specific id cmd issue

#    ssampath    06/02/09 - Fix for bug 8566942

#    ssampath    05/19/09 - Move removal of /tmp/deinstall to java

#                           code.

#    prsubram    04/30/09 - Fix for the bug#8474891

#    mwidjaja    04/29/09 - Added user check between the user running the

#                           script and inventory owner

#    ssampath    04/29/09 - Changes to make error message better when deinstall

#                           tool is invoked from inside ORACLE_HOME and -home

#                           is passed.

#    ssampath    04/15/09 - Fix for bug 8414555

#    prsubram    04/09/09 - LD_LIBRARY_PATH is ported for sol,hp-ux & aix

#    mwidjaja    03/26/09 - Disallow -home for running from OH

#    ssampath    03/24/09 - Fix for bug 8339519

#    wyou        02/25/09 - restructure the ohome check

#    wyou        02/25/09 - change the error msg for directory existance check

#    wyou        02/12/09 - add directory existance check

#    wyou        02/09/09 - add the check for the writablity for the oracle

#                           home passed-in

#    ssampath    01/21/09 - Add oui/lib to LD_LIBRARY_PATH

#    poosrini    01/07/09 - LOG related changes

#    ssampath    11/24/08 - Create /main/osds/unix branch

#    dchriste    10/30/08 - eliminate non-generic tools like 'cut'

#    ssampath    08/18/08 - Pickup srvm.jar from JLIB directory.

#    ssampath    07/30/08 - Add http_client.jar and OraCheckpoint.jar to

#                           CLASSPATH

#    ssampath    07/08/08 - assistantsCommon.jar and netca.jar location has

#                           changed.

#    ssampath    04/11/08 - If invoking the tool from installed home, JRE_HOME

#                           should be set to $OH/jdk/jre.

#    ssampath    04/09/08 - Add logic to instantiate ORA_CRS_HOME, JAVA_HOME

#                           etc.,

#    ssampath    04/03/08 - Pick up ldapjclnt11.jar

#    idai        04/03/08 - remove assistantsdc.jar and netcadc.jar

#    bktripat    02/23/07 -

#    khsingh     07/18/06 - add osdbagrp fix

#    khsingh     07/07/06 - fix regression

#    khsingh     06/20/06 - fix bug 5228203

#    bktripat    06/12/06 - Fix for bug 5246802

#    bktripat    05/08/06 -

#    khsingh     05/08/06 - fix tool to run from any parent directory

#    khsingh     05/08/06 - fix LD_LIBRARY_PATH to have abs. path

#    ssampath    05/01/06 - Fix for bug 5198219

#    bktripat    04/21/06 - Fix for bug 5074246

#    khsingh     04/11/06 - fix bug 5151658

#    khsingh     04/08/06 - Add WA for bugs 5006414 & 5093832

#    bktripat    02/08/06 - Fix for bug 5024086 & 5024061

#    bktripat    01/24/06 -

#    mstalin     01/23/06 - Add lib to pick libOsUtils.so

#    bktripat    01/19/06 - adding library changes

#    rahgupta    01/19/06 -

#    bktripat    01/19/06 -

#    mstalin     01/17/06 - Modify the assistants deconfig jar file name

#    rahgupta    01/17/06 - updating emcp classpath

#    khsingh     01/17/06 - export ORACLE_HOME

#    khsingh     01/17/06 - fix for CRS deconfig.

#    hying       01/17/06 - netcadc.jar

#    bktripat    01/16/06 -

#    ssampath    01/16/06 -

#    bktripat    01/11/06 -

#    clo         01/10/06 - add EMCP entries

#    hying       01/10/06 - netcaDeconfig.jar

#    mstalin     01/09/06 - Add OraPrereqChecks.jar

#    mstalin     01/09/06 -

#    khsingh     01/09/06 -

#    mstalin     01/09/06 - Add additional jars for assistants

#    ssampath    01/09/06 - removing parseOracleHome temporarily

#    ssampath    01/09/06 -

#    khsingh     01/08/06 - fix for CRS deconfig

#    ssampath    12/08/05 - added java version check

#    ssampath    12/08/05 - initial run,minor bugs fixed

#    ssampath    12/07/05 - Creation

#

 

#MACROS

 

if [ -z "$UNAME" ]; then UNAME="/bin/uname"; fi

if [ -z "$ECHO" ]; then ECHO="/bin/echo"; fi

if [ -z "$AWK" ]; then AWK="/bin/awk"; fi

if [ -z "$ID" ]; then ID="/usr/bin/id"; fi

if [ -z "$DIRNAME" ]; then DIRNAME="/usr/bin/dirname"; fi

if [ -z "$FILE" ]; then FILE="/usr/bin/file"; fi

 

if [ "`$UNAME`" = "SunOS" ]

then

    if [ -z "${_xpg4ShAvbl_deconfig}" ]

    then

        _xpg4ShAvbl_deconfig=1

        export _xpg4ShAvbl_deconfig

        /usr/xpg4/bin/sh $0 "$@"

        exit $?

    fi

        AWK="/usr/xpg4/bin/awk"

fi

 

# Set umask to 022 always.

umask 022

 

INSTALLED_VERSION_FLAG=true

ARCHITECTURE_FLAG=64

 

TOOL_ARGS=$* # initialize this always.

 

# Since the OTN and the installed version of the tool is same, only way to

# differentiate is through the instantated variable ORA_CRS_HOME.  If it is

# NOT instantiated, then the tool is a downloaded version.

# Set HOME_VER to true based on the value of $INSTALLED_VERSION_FLAG

if [ x"$INSTALLED_VERSION_FLAG" = x"true" ]

then

   ORACLE_HOME=/g01/11.2.0/grid

   HOME_VER=1     # HOME_VER

   TOOL_ARGS="$ORACLE_HOME $TOOL_ARGS"

else

   HOME_VER=0

fi

 

# Save current working directory

CURR_DIR=`pwd`

 

# If CURR_DIR is different from TOOL_DIR get that location and cd into it.

TOOL_REL_PATH=`$DIRNAME $0`

cd $TOOL_REL_PATH

 

DOT=`$ECHO $TOOL_REL_PATH | $AWK -F'/' '{ print $1}'`

 

if [ "$DOT" = "." ];

then

  TOOL_DIR=$CURR_DIR/$TOOL_REL_PATH

elif [ `expr "$DOT" : '.*'` -gt 0 ];

then

  TOOL_DIR=$CURR_DIR/$TOOL_REL_PATH

else

  TOOL_DIR=$TOOL_REL_PATH

fi

 

# Check if this script is run as root.  If so, then error out.

# This is fix for bug 5024086.

 

RUID=`$ID|$AWK -F\( '{print $2}'|$AWK -F\) '{print $1}'`

if [ ${RUID} = "root" ];then

$ECHO "You must not be logged in as root to run $0."

$ECHO "Log in as Oracle user and rerun $0."

exit $ROOT_USER

fi

 

# DEFINE FUNCTIONS BELOW

computeArchFlag() {

   TOOL_HOME=$1

   case `$UNAME` in

      HP-UX)

         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK -F\- '{print $2}' | $AWK '{print $1}'`" = "64" ];then

            ARCHITECTURE_FLAG="-d64"

         fi

      ;;

      AIX)

         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK '{print $1}' | $AWK -F\- '{print $1}'`" = "64" ];then

            ARCHITECTURE_FLAG="-d64"

         fi

      ;;

      *)

         if [ "`/usr/bin/file $TOOL_HOME/bin/kfod | $AWK -F\: '{print $2}' | $AWK '{print $2}' | $AWK -F\- '{print $1}'`" = "64" ];then

            ARCHITECTURE_FLAG="-d64"

         fi

      ;;

   esac

}

 

if [ $HOME_VER = 1 ];

then

   $ECHO "Checking for required files and bootstrapping ..."

   $ECHO "Please wait ..."

   TEMP_LOC=`$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/deinstall/bootstrap.pl $HOME_VER $TOOL_ARGS`

   TOOL_DIR=$TEMP_LOC

else

   TEMP_LOC=`$TOOL_DIR/perl/bin/perl $TOOL_DIR/bootstrap.pl $HOME_VER $TOOL_ARGS`

fi

 

computeArchFlag $TOOL_DIR

 

$TOOL_DIR/perl/bin/perl $TOOL_DIR/deinstall.pl $HOME_VER $TEMP_LOC $TOOL_DIR $ARCHITECTURE_FLAG $TOOL_ARGS

 

[grid@rac1 deinstall]$ ./deinstall

 

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2011-08-31_11-59-55PM/logs/

 

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

######################### CHECK OPERATION START #########################

Install check configuration START

 

Checking for existence of the Oracle home location /g01/11.2.0/grid

Oracle Home type selected for de-install is: CRS

Oracle Base selected for de-install is: /g01/orabase

Checking for existence of central inventory location /g01/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /g01/11.2.0/grid

The following nodes are part of this cluster: rac1,rac2

 

Install check configuration END

 

Skipping Windows and .NET products configuration check

 

Checking Windows and .NET products configuration END

 

Traces log file: /tmp/deinstall2011-08-31_11-59-55PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]

>

 

The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"

Enter the IP netmask of Virtual IP "172.1.1.206" on node "rac1"[255.255.255.0]

>

 

Enter the network interface name on which the virtual IP address "172.1.1.206" is active

>

 

Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]

>

 

The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"

Enter the IP netmask of Virtual IP "172.1.1.207" on node "rac2"[255.255.255.0]

>

 

Enter the network interface name on which the virtual IP address "172.1.1.207" is active

>

 

Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]

>

 

The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"

Enter the IP netmask of Virtual IP "172.1.1.204" on node "rac3"[255.255.255.0]

>

 

Enter the network interface name on which the virtual IP address "172.1.1.166" is active

>

 

Enter an address or the name of the virtual IP[]

>

 

Network Configuration check config START

 

Network de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/

netdc_check2011-09-01_12-01-50-AM.log

 

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:

 

Network Configuration check config END

 

Asm Check Configuration START

 

ASM de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/

asmcadc_check2011-09-01_12-01-51-AM.log

 

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]:

ASM was not detected in the Oracle Home

 

######################### CHECK OPERATION END #########################

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /g01/11.2.0/grid

The cluster node(s) on which the Oracle home de-installation will be performed are:rac1,rac2,rac3

Oracle Home selected for de-install is: /g01/11.2.0/grid

Inventory Location where the Oracle home registered is: /g01/oraInventory

Skipping Windows and .NET products configuration check

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

ASM was not detected in the Oracle Home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2011-08-31_11-59-55PM/logs/deinstall_deconfig2011-09-01_12-01-15-AM.out'

Any error messages from this session will be written to: '/tmp/deinstall2011-08-31_11-59-55PM/logs/deinstall_deconfig2011-09-01_12-01-15-AM.err'

 

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/asmcadc_clean2011-09-01_12-02-00-AM.log

ASM Clean Configuration END

 

Network Configuration clean config START

 

Network de-configuration trace file location: /tmp/deinstall2011-08-31_11-59-55PM/logs/netdc_clean2011-09-01_12-02-00-AM.log

 

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

 

De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

 

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

 

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

 

De-configuring backup files on all nodes...

Backup files de-configured successfully.

 

The network configuration has been cleaned up successfully.

 

Network Configuration clean config END

 

---------------------------------------->

 

The deconfig command below can be executed in parallel on all the remote nodes.

Execute the command on  the local node after the execution completes on all the remote nodes.

 

Run the following command as the root user or the administrator on node "rac3".

 

/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib

-I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl

-force  -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

 

Run the following command as the root user or the administrator on node "rac2".

 

/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib

-I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force

-deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

 

Run the following command as the root user or the administrator on node "rac1".

 

/tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib

-I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl

-force  -deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

-lastnode

 

Press Enter after you finish running the above commands

 

While deinstall is running, it will ask you to execute the commands it prints as the root user on all of the nodes:

 

su - root

 

[root@rac3 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib

-I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force

-deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp

PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type

PRCR-1068 : Failed to query resources

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.gsd is registered

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.ons is registered

Cannot communicate with crsd

 

ACFS-9200: Supported

CRS-4535: Cannot communicate with Cluster Ready Services

CRS-4000: Command Stop failed, or completed with errors.

CRS-4544: Unable to connect to OHAS

CRS-4000: Command Stop failed, or completed with errors.

Successfully deconfigured Oracle clusterware stack on this node

 

[root@rac2 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib -I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force  -deconfig -paramfile

"/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : nodeapps object is not supported in Oracle Restart

ACFS-9200: Supported

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

You must kill crs processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

ACFS-9313: No ADVM/ACFS installation detected.

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command 1 /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

 

[root@rac1 ~]# /tmp/deinstall2011-08-31_11-59-55PM/perl/bin/perl -I/tmp/deinstall2011-08-31_11-59-55PM/perl/lib

-I/tmp/deinstall2011-08-31_11-59-55PM/crs/install /tmp/deinstall2011-08-31_11-59-55PM/crs/install/rootcrs.pl -force

-deconfig -paramfile "/tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Using configuration parameter file: /tmp/deinstall2011-08-31_11-59-55PM/response/deinstall_Ora11g_gridinfrahome1.rsp

Adding daemon to inittab

crsexcl failed to start

Failed to start the Clusterware. Last 20 lines of the alert log follow:

2011-08-31 23:36:55.813

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:38:23.855

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:39:03.873

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:39:11.707

[/g01/11.2.0/grid/bin/orarootagent.bin(4559)]CRS-5822:Agent '/g01/11.2.0/grid/bin/orarootagent_root'

disconnected from server. Details at (:CRSAGF00117:) {0:2:27} in

/g01/11.2.0/grid/log/rac1/agent/crsd/orarootagent_root/orarootagent_root.log.

2011-08-31 23:39:12.725

[ctssd(4067)]CRS-2405:The Cluster Time Synchronization Service on host rac1 is shutdown by user

2011-08-31 23:39:12.764

[mdnsd(3868)]CRS-5602:mDNS service stopping by request.

2011-08-31 23:39:13.987

[/g01/11.2.0/grid/bin/orarootagent.bin(3892)]CRS-5016:Process "/g01/11.2.0/grid/bin/acfsload"

spawned by agent "/g01/11.2.0/grid/bin/orarootagent.bin" for action "check" failed:

details at "(:CLSN00010:)" in "/g01/11.2.0/grid/log/rac1/agent/ohasd/orarootagent_root/orarootagent_root.log"

2011-08-31 23:39:27.121

[cssd(3968)]CRS-1603:CSSD on node rac1 shutdown by user.

2011-08-31 23:39:27.130

[ohasd(3639)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE

2011-08-31 23:39:31.926

[gpnpd(3880)]CRS-2329:GPNPD on node rac1 shutdown.

 

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : scan_listener object is not supported in Oracle Restart

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : scan_listener object is not supported in Oracle Restart

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : scan object is not supported in Oracle Restart

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : scan object is not supported in Oracle Restart

Usage: srvctl [command] [object] []

    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config

    objects: database|service|asm|diskgroup|listener|home|ons

For detailed help on each command and object and its options use:

  srvctl [command] -h or

  srvctl [command] [object] -h

PRKO-2012 : nodeapps object is not supported in Oracle Restart

ACFS-9200: Supported

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Modify failed, or completed with errors.

Adding daemon to inittab

crsexcl failed to start

Failed to start the Clusterware. Last 20 lines of the alert log follow:

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time

Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:38:23.855

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time

Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:39:03.873

[ctssd(4067)]CRS-2408:The clock on host rac1 has been updated by the Cluster Time

Synchronization Service to be synchronous with the mean cluster time.

2011-08-31 23:39:11.707

[/g01/11.2.0/grid/bin/orarootagent.bin(4559)]CRS-5822:Agent '/g01/11.2.0/grid/bin/orarootagent_root'

disconnected from server. Details at (:CRSAGF00117:) {0:2:27} in

/g01/11.2.0/grid/log/rac1/agent/crsd/orarootagent_root/orarootagent_root.log.

2011-08-31 23:39:12.725

[ctssd(4067)]CRS-2405:The Cluster Time Synchronization Service on host rac1 is shutdown by user

2011-08-31 23:39:12.764

[mdnsd(3868)]CRS-5602:mDNS service stopping by request.

2011-08-31 23:39:13.987

[/g01/11.2.0/grid/bin/orarootagent.bin(3892)]CRS-5016:Process

"/g01/11.2.0/grid/bin/acfsload" spawned by agent "/g01/11.2.0/grid/bin/orarootagent.bin" for action

"check" failed: details at "(:CLSN00010:)" in

"/g01/11.2.0/grid/log/rac1/agent/ohasd/orarootagent_root/orarootagent_root.log"

2011-08-31 23:39:27.121

[cssd(3968)]CRS-1603:CSSD on node rac1 shutdown by user.

2011-08-31 23:39:27.130

[ohasd(3639)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE

2011-08-31 23:39:31.926

[gpnpd(3880)]CRS-2329:GPNPD on node rac1 shutdown.

[client(13099)]CRS-10001:01-Sep-11 00:11 ACFS-9200: Supported

 

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

crsctl delete for vds in SYSTEMDG ... failed

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Delete failed, or completed with errors.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

ACFS-9313: No ADVM/ACFS installation detected.

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command 1 /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

 

Return to the terminal where deinstall was originally started and press Enter:

 

The deconfig command below can be executed in parallel on all the remote nodes.

Execute the command on  the local node after the execution completes on all the remote nodes.

 

Press Enter after you finish running the above commands

 

<----------------------------------------

 

Removing Windows and .NET products configuration END

Oracle Universal Installer clean START

 

Detach Oracle home '/g01/11.2.0/grid' from the central inventory on the local node : Done

 

Delete directory '/g01/11.2.0/grid' on the local node : Done

 

Delete directory '/g01/oraInventory' on the local node : Done

 

Delete directory '/g01/orabase' on the local node : Done

 

Detach Oracle home '/g01/11.2.0/grid' from the central inventory on the remote nodes 'rac1,rac2' : Done

 

Delete directory '/g01/11.2.0/grid' on the remote nodes 'rac1,rac2' : Done

 

Delete directory '/g01/oraInventory' on the remote nodes 'rac1' : Done

 

Delete directory '/g01/oraInventory' on the remote nodes 'rac2' : Failed <<<<

 

The directory '/g01/oraInventory' could not be deleted on the nodes 'rac2'.

Delete directory '/g01/orabase' on the remote nodes 'rac2' : Done

 

Delete directory '/g01/orabase' on the remote nodes 'rac1' : Done

 

Oracle Universal Installer cleanup completed with errors.

 

Oracle Universal Installer clean END

 

Oracle install clean START

 

Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-59-55PM' on node 'rac1'

Clean install operation removing temporary directory '/tmp/deinstall2011-08-31_11-59-55PM' on node 'rac2'

 

Oracle install clean END

 

######################### CLEAN OPERATION END #########################

 

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on node "rac3"

Oracle Clusterware is stopped and successfully de-configured on node "rac2"

Oracle Clusterware is stopped and successfully de-configured on node "rac1"

Oracle Clusterware is stopped and de-configured successfully.

Skipping Windows and .NET products configuration clean

Successfully detached Oracle home '/g01/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/g01/11.2.0/grid' on the local node.

Successfully deleted directory '/g01/oraInventory' on the local node.

Successfully deleted directory '/g01/orabase' on the local node.

Successfully detached Oracle home '/g01/11.2.0/grid' from the central inventory on the remote nodes 'rac1,rac2'.

Successfully deleted directory '/g01/11.2.0/grid' on the remote nodes 'rac2,rac3'.

Successfully deleted directory '/g01/oraInventory' on the remote nodes 'rac3'.

Failed to delete directory '/g01/oraInventory' on the remote nodes 'rac2'.

Successfully deleted directory '/g01/orabase' on the remote nodes 'rac2'.

Successfully deleted directory '/g01/orabase' on the remote nodes 'rac3'.

Oracle Universal Installer cleanup completed with errors.

 

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.

 

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1 rac2 ' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

 

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

After deinstall finishes, it will prompt you to run "rm -rf /etc/oraInst.loc" and "rm -rf /opt/ORCLfmap" on the nodes it lists; simply do so.
Once the scripts above have completed, GI has been removed from every node, /etc/inittab has been restored to its non-GI version, and the CRS-related scripts under /etc/init.d have been removed accordingly.
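
The final cleanup and a quick sanity check can be done from the shell. The commands below are only a minimal sketch: the paths come from the deinstall output above (/g01/11.2.0/grid, /g01/oraInventory) and the init entry/script names are typical 11.2 values, so they may differ on your platform. Run them as root on each node:

rm -rf /etc/oraInst.loc        # requested by the deinstall tool
rm -rf /opt/ORCLfmap           # requested by the deinstall tool

grep -i ohasd /etc/inittab                              # should print nothing once the GI entry is gone
ls /etc/init.d | grep -iE 'ohasd|init\.crs'             # CRS-related scripts should have been removed
ls -d /g01/11.2.0/grid /g01/oraInventory 2>/dev/null    # these directories should no longer exist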

8.2 Disk groups not found when creating the database

If DBCA cannot find the required disk groups while creating the database, check that the primary and secondary groups of the oracle installation user are correct and identical on both machines.

 
