Friday, June 01, 2012

Install Oracle 11gR2 RAC on HP-UX, AIX, and RHEL


This article is from the “alexatrebooting” blog; please keep this attribution: http://alexatrebooting.blog.51cto.com/3219360/862232


This article covers installing 11gR2 RAC on three operating systems (HP-UX, AIX, and RHEL); I hope it helps.

1. System environment
Hardware:
  HP-UX: HP rx2600 and HP rx3600 servers, 1 SAN switch, one EVA4400 storage array
  AIX:   IBM P570-1 and P570-2 servers, 1 SAN switch, one DS8300 storage array
  RHEL:  PC server 1 and PC server 2, 1 SAN switch, one EMC NS-480 storage array
Software:
  HP-UX: hpia64_11gR2_grid.zip, hpia64_11gR2_database.zip
  AIX:   Aix6L64_11gR2_grid.zip, aix6L64_11gR2_database.zip
  RHEL:  linux.x64_11gR2_grid.zip, linux.x64_11gR2_database.zip
Installation plan:

Node    Instance name  Database name  Processor       RAM   Operating system
node1   rac1           RAC            2 × 1.900 GHz   4 GB  HP-UX/AIX/RHEL
node2   rac2           RAC            2 × 1.900 GHz   4 GB  HP-UX/AIX/RHEL
Network configuration
Node    Private IP  Public IP   Virtual IP   SCAN name       SCAN IP
node1   1.1.1.1     11.1.1.1    11.1.1.11    r-cluster-scan  11.1.1.21
node2   1.1.1.2     11.1.1.2    11.1.1.12
Oracle software components
Component            OS user  Primary group  Secondary groups         Home directory  Oracle base / Oracle home
Grid Infrastructure  grid     oinstall       asmadmin,asmdba,asmoper  /home/grid      /u01/app/crs_base / /u01/app/crs_home
Oracle RAC           oracle   oinstall       dba,oper,asmdba          /home/oracle    /u01/app/oracle / /u01/app/oracle/product/11.2.0/db_1
Storage components
Component             File system  Volume size  ASM diskgroup  ASM redundancy  Device names
OCR / voting files    ASM          300 GB       CRSDG1         External        oraocrs1~3
Data / recovery area  ASM          300 GB       DATADG1        External        oradata4~6

2. Pre-installation preparation
2.1.       Software and patch lists
2.1.1 HP-UX software and patches
The required patches are:
PHCO_40381 11.31 Disk Owner Patch
PHKL_38038 vm cumulative patch
PHKL_38938 11.31 SCSI cumulative I/O patch
PHKL_39351 Scheduler patch : post wait hang
PHSS_36354 11.31 assembler patch
PHSS_37042 11.31 hppac (packed decimal)
PHSS_37959 Libcl patch for alternate stack issue fix
(QXCR1000818011)
PHSS_39094 11.31 linker + fdp cumulative patch
PHSS_39100 11.31 Math Library Cumulative Patch
PHSS_39102 11.31 Integrity Unwind Library
PHSS_38141 11.31 aC++ Runtime
Patch download locations:
HP provides patch bundles at
Individual patches can be downloaded from
Install a patch with:
#swinstall -s <depot_path>/patchname

2.1.2 AIX software and patches
AIX 6.1 /5.3 required packages:
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat 5.3.9.0 or later (AIX 5.3)
bos.perf.libperfstat 6.1.2.1 or later (AIX 6.1)
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix50.rte:10.1.0.0 or later (AIX 5.3)
xlC.aix61.rte:10.1.0.0 or later (AIX 6.1)
gpfs.base 3.2.1.8 or later (Only for RAC)
Authorized Problem Analysis Reports (APARs) for AIX 5L:
IZ42940
IZ49516
IZ52331

2.1.3 RHEL software and patches
RHEL5,OEL5:
Refer to Note 880989.1
binutils-2.17.50.0.6-6.el5 (x86_64)
compat-libstdc++-33-3.2.3-61 (x86_64) << both ARCH's are required. See next line.
compat-libstdc++-33-3.2.3-61 (i386) << both ARCH's are required. See previous line.
elfutils-libelf-0.125-3.el5 (x86_64)
glibc-2.5-24 (x86_64) << both ARCH's are required. See next line.
glibc-2.5-24 (i686) << both ARCH's are required. See previous line.
glibc-common-2.5-24 (x86_64)
ksh-20060214-1.7 (x86_64)
libaio-0.3.106-3.2 (x86_64) << both ARCH's are required. See next line.
libaio-0.3.106-3.2 (i386) << both ARCH's are required. See previous line.
libgcc-4.1.2-42.el5 (i386) << both ARCH's are required. See next line.
libgcc-4.1.2-42.el5 (x86_64) << both ARCH's are required. See previous line.
libstdc++-4.1.2-42.el5 (x86_64) << both ARCH's are required. See next line.
libstdc++-4.1.2-42.el5 (i386) << both ARCH's are required. See previous line.
make-3.81-3.el5 (x86_64)
The remaining Install Guide requirements will have to be installed:
elfutils-libelf-devel-0.125-3.el5.x86_64.rpm
a.) requires elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm as a prerequisite, as listed below.
b.) elfutils-libelf-devel and elfutils-libelf-devel-static each depend upon the other. Therefore, they must be installed together, in one (1) "rpm -ivh" command as follows:
rpm -ivh elfutils-libelf-devel-0.125-3.el5.x86_64.rpm elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm
glibc-headers-2.5-24.x86_64.rpm
a.) requires kernel-headers-2.6.18-92.el5.x86_64.rpm as a prerequisite, as listed below
glibc-devel-2.5-24.x86_64.rpm << both ARCH's are required. See next item.
glibc-devel-2.5-24.i386.rpm << both ARCH's are required. See previous item.
gcc-4.1.2-42.el5.x86_64.rpm
a.) requires libgomp-4.1.2-42.el5.x86_64.rpm as a prerequisite, as listed below
libstdc++-devel-4.1.2-42.el5.x86_64.rpm
gcc-c++-4.1.2-42.el5.x86_64.rpm
libaio-devel-0.3.106-3.2.x86_64.rpm << both ARCH's are required. See next item
libaio-devel-0.3.106-3.2.i386.rpm << both ARCH's are required. See previous item.
sysstat-7.0.2-1.el5.x86_64.rpm
unixODBC-2.2.11-7.1.x86_64.rpm << both ARCH's are required. See next item
unixODBC-2.2.11-7.1.i386.rpm << both ARCH's are required. See previous item.
unixODBC-devel-2.2.11-7.1.x86_64.rpm << both ARCH's are required. See next item
unixODBC-devel-2.2.11-7.1.i386.rpm << both ARCH's are required. See previous item.
RHEL4,OEL4:
Refer to Note 880942.1
SLES10:
Refer to Note 884435.1
SLES11:
Refer to Note 881044.1
2.2.       Kernel settings
2.2.1 HP-UX kernel parameter settings
1. Set the kernel parameters as follows:
NPROC 4096
KSI_ALLOC_MAX (NPROC*8)
EXECUTABLE_STACK=0
MAX_THREAD_PROC 1024
MAXDSIZ 1073741824
MAXDSIZ_64BIT 2147483648
MAXTSIZE_64BIT 1073741824
MAXSSIZ 134217728 bytes
MAXSSIZ_64BIT 1073741824
MAXUPRC ((NPROC*9)/10)+1
MSGMAP (MSGTQL+2) *
MSGMNI (NPROC)
MSGSEG 32767 *
MSGTQL (NPROC)
NCSIZE (NINODE+1024)
NFILE (15*NPROC+2048) *
NFLOCKS (NPROC)
NINODE (8*NPROC+2048)
NKTHREAD (((NPROC*7)/4)+16)
SEMMNI (NPROC)
SEMMNS (SEMMNI*2)
SEMMNU (NPROC - 4)
SEMVMX 32767
SHMMAX AvailMem
SHMMNI 4096
SHMSEG 512
VPS_CEILING 64
Set a kernel parameter with:
#kctune PARAMETER=value
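To make this step repeatable, the fixed-value parameters from the list can be emitted as a reviewable command list first. A minimal sketch (the parameter subset and the `gen_kctune` helper name are illustrative, not from the original):

```shell
#!/bin/sh
# Emit (rather than run) kctune commands for the fixed-value parameters above,
# so they can be reviewed before being applied as root on each node.
gen_kctune() {
  while read name value; do
    [ -n "$name" ] && echo "kctune ${name}=${value}"
  done <<'EOF'
max_thread_proc 1024
maxdsiz 1073741824
maxdsiz_64bit 2147483648
maxssiz 134217728
maxssiz_64bit 1073741824
msgseg 32767
semvmx 32767
shmmni 4096
shmseg 512
vps_ceiling 64
EOF
}
gen_kctune
```

Piping the output through `sh` as root applies the settings; review it first, and set the formula-based values such as KSI_ALLOC_MAX individually.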
2. Create the following links on each node:
# cd /usr/lib
# ln -s libX11.3 libX11.sl
# ln -s libXIE.2 libXIE.sl
# ln -s libXext.3 libXext.sl
# ln -s libXhp11.3 libXhp11.sl
# ln -s libXi.3 libXi.sl
# ln -s libXm.4 libXm.sl
# ln -s libXp.2 libXp.sl
# ln -s libXt.3 libXt.sl
# ln -s libXtst.2 libXtst.sl

2.2.2 AIX kernel parameter settings
Set AIXTHREAD_SCOPE=S in the environment (see Part Number E10839-04):
export AIXTHREAD_SCOPE=S
1. Modify the resource limits for the Oracle users
As the root user, edit /etc/security/limits:
#vi /etc/security/limits
default:
        fsize = -1
        core = 2097151
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1

2. Modify system configuration parameters
# ioo -o aio_maxreqs
aio_maxreqs = 65536  
# lsattr -E -l sys0 -a maxuproc
# smit chgsys
maxuproc 16384 Maximum number of PROCESSES allowed per user True

3. Virtual memory settings:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

4. Configure network parameters
Add the following lines to /etc/rc.net:
if [ -f /usr/sbin/no ] ; then
   /usr/sbin/no -o udp_sendspace=65536
   /usr/sbin/no -o udp_recvspace=655360
   /usr/sbin/no -o tcp_sendspace=65536
   /usr/sbin/no -o tcp_recvspace=65536
   /usr/sbin/no -o rfc1323=1
   /usr/sbin/no -o sb_max=1310720
   /usr/sbin/no -r -o ipqmaxlen=512
fi

The ipqmaxlen parameter requires a reboot to take effect:
/usr/sbin/no -r -o ipqmaxlen=512

2.2.3 RHEL kernel parameter settings
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 512 x processes (for example 6815744 for 13312 processes)
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.shmall = physical RAM size / page size; for most systems this is 2097152 (see Note 301830.1 for more information)
kernel.shmmax = half of physical RAM, but not greater than 4 GB; for a system with 4 GB of physical RAM this is 2147483648

1. Modify the resource limits for the Oracle users
#vi /etc/security/limits.conf
grid                      soft     nproc    2047
grid                      hard     nproc    16384
grid                      soft     nofile 1024
grid                      hard     nofile 65536
oracle                    soft     nproc    2047
oracle                    hard     nproc    16384
oracle                    soft     nofile 1024
oracle                    hard     nofile 65536
2. Modify system configuration parameters
#vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 8388608
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the changes:
sysctl -p

2.3.       Configure NTP
2.3.1 NTP on HP-UX
1. Configure the NTP server
 #vi /etc/ntp.conf
server   127.127.1.1
fudge            127.127.1.1 stratum 10
#vi /etc/rc.config.d/netdaemons
NTPDATE_SERVER=
XNTPD=1
XNTPD_ARGS="-x"
# /sbin/init.d/xntpd   start
2. Configure the NTP clients
#vi /etc/ntp.conf
server 128.1.1.1                        # assuming 128.1.1.1 is the NTP server's IP address
driftfile   /etc/ntp.drift

# vi /etc/rc.config.d/netdaemons
NTPDATE_SERVER=128.1.1.1        # assuming 128.1.1.1 is the NTP server's IP address
XNTPD=1
XNTPD_ARGS="-x"
# /sbin/init.d/xntpd   start
3. Confirm that NTP is working: run the ntpq -p command and check that the client has formed the proper association.
   #/usr/bin/ntpq -p
  remote           refid          st t when poll reach   delay   offset    disp
===================================================================
*rx2600          LOCAL(1)      4 u 37 64 377   0.14    7.495    0.18

2.3.2 NTP on AIX
1. 11gR2 RAC ships with the CTSS time synchronization service, so the installation guide asks you to disable NTP; the final installation checks may still warn that the NTP service is unusable, which can be ignored.
# stopsrc -s xntpd
After the installation completes, run the following as the grid user to check the time synchronization service:
   $ crsctl stat resource ora.ctssd -t -init

2. NTP configuration
Enable the slewing option:
#vi /etc/rc.tcpip
   start /usr/sbin/xntpd "$src_running" "-x"  

3. Start the daemon on both nodes:
 # startsrc -s xntpd -a "-x"

2.3.3 NTP on RHEL
1. Server-side configuration
# vi /etc/ntp.conf    (add the following line)
restrict 10.20.28.0 mask 255.255.255.0 nomodify notrap

Note that the /etc/hosts file should contain:
127.0.0.1           localhost    loopback

2. Client-side configuration:
ntpdate 10.20.28.39
crontab -e
0-59/10 * * * * /usr/bin/ntpdate 10.20.28.39

3. Enable the slewing option
Edit /etc/sysconfig/ntpd and add the -x flag, as in:
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

4. Enable the daemon on both nodes:
chkconfig ntpd on

2.4.       DNS configuration
Since the node count is usually small, using SCAN through DNS has little practical value, so a small trick is used to fool the cluvfy tool and let verification pass: move the real nslookup aside and replace it with a wrapper script.
#[/]mv /usr/bin/nslookup /usr/bin/nslookup.org
#[/]cat /usr/bin/nslookup
#!/usr/bin/sh
HOSTNAME=${1}
if [[ $HOSTNAME = "r-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   r-cluster-scan"
    echo "Address: 11.1.1.21" # assuming 11.1.1.21 is the SCAN address
else
    /usr/bin/nslookup.org $HOSTNAME
fi
Make the wrapper executable:
#[/]chmod 755 /usr/bin/nslookup
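The wrapper can be rehearsed in a scratch location before the real binary is replaced. A minimal, self-contained sketch (the TARGET scratch path is an assumption of mine; the SCAN name and address come from the plan above):

```shell
#!/bin/sh
# Write the wrapper to a scratch path and exercise only the SCAN branch;
# point TARGET at /usr/bin/nslookup once the output looks right.
TARGET="${TARGET:-./nslookup.test}"
cat > "$TARGET" <<'EOF'
#!/bin/sh
HOSTNAME=${1}
if [ "$HOSTNAME" = "r-cluster-scan" ]; then
    echo "Name:   r-cluster-scan"
    echo "Address: 11.1.1.21"
else
    /usr/bin/nslookup.org "$HOSTNAME"
fi
EOF
chmod 755 "$TARGET"
"$TARGET" r-cluster-scan
```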

Note: if you need to modify your SQLNET.ORA, ensure that EZCONNECT is in the list if you specify the order of the naming methods used for client name resolution lookups (the 11g Release 2 default is NAMES.DIRECTORY_PATH=(tnsnames, ldap, ezconnect)).

2.5.       Create users, directories, and environment variables
2.5.1 HP-UX: users, directories, and environment

1. Create the users and the corresponding directories
#/usr/sbin/groupadd -g 501 oinstall
#/usr/sbin/groupadd -g 502 dba
#/usr/sbin/groupadd -g 503 oper
#/usr/sbin/groupadd -g 504 asmadmin
#/usr/sbin/groupadd -g 505 asmoper
#/usr/sbin/groupadd -g 506 asmdba
#/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
#/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
#mkdir -p /u01/app/crs_base
#mkdir -p /u01/app/crs_home
#mkdir -p /u01/app/oracle/product/11.2.0/db_1
#chown -R oracle:oinstall /u01/app/oracle
#chown -R grid:oinstall /u01/app/crs*
# chown grid:asmadmin /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
# chown grid:asmadmin /dev/rdisk/cxtydz
# chmod 660 /dev/rdisk/cxtydz

Note: Before installation, OCR files must be owned by the user performing the installation (grid or oracle). That installation user must have oinstall as its primary group. During installation, OUI changes ownership of the OCR files to root.

To protect the OCR from logical disk failure, create another ASM diskgroup after installation and add the OCR to the second diskgroup using the ocrconfig command.

2. Configure the grid user's environment variables
#su - grid
Note: the grid user's home directory must not be a subdirectory of the Oracle base directory.
#more .profile    (grid user's environment variables)
export PS1="`/usr/bin/hostname`-> "
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
/usr/local/bin/bash
#
if [ -t 0 ]; then
   stty intr ^C
fi
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi
3. Configure the oracle user's environment variables
#su - oracle
#more .profile    (oracle user's environment variables)
export PS1="`/usr/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORA_GRID_HOME=/u01/app/crs_home/
export ORACLE_OWNER=oracle
export ORACLE_SID=dbrac1
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi

Note:
#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile file:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi

if [ -t 0 ]; then
   stty intr ^C
fi

#For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:

if ( $USER == "oracle" || $USER == "grid" ) then
        limit maxproc 16384
        limit descriptors 65536
endif

test -t 0
if ($status == 0) then
   stty intr ^C
endif

2.5.2 AIX: users, directories, and environment
1. Create the users and directories
mkgroup -'A' id='601' adms='root' oinstall
mkgroup -'A' id='602' adms='root' dba
mkgroup -'A' id='603' adms='root' oper
mkgroup -'A' id='604' adms='root' asmadmin
mkgroup -'A' id='605' adms='root' asmdba
mkgroup -'A' id='606' adms='root' asmoper

mkuser id='601' pgrp='oinstall' groups='asmadmin,asmdba,asmoper,dba,oper'  home='/home/grid' grid
mkuser id='602' pgrp='oinstall' groups='dba,oper,asmdba' home='/home/oracle' oracle
/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01

passwd oracle
passwd grid
/usr/bin/lsuser -a capabilities grid

2. Edit the grid user's default environment:
#vi /home/grid/.profile
umask 0022
export PS1="`/usr/bin/hostname`-> "
export ORACLE_HOSTNAME=`hostname`
export JAVA_HOME=/usr/local/java
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH:/usr/local/bin/:.
export AIXTHREAD_SCOPE=S
#/usr/local/bin/bash
if [ -t 0 ]; then
   stty intr ^C
fi

3. Edit the oracle user's default environment
#vi /home/oracle/.profile
export PS1="`/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_OWNER=oracle
export ORACLE_SID=tiqs21
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
export AIXTHREAD_SCOPE=S
#/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
umask 022
#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile  
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi

2.5.3 RHEL: users, directories, and environment
1. Create the users and directories
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 505 asmoper
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R root:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/crs*
chmod -R 775 /u01
chmod -R 755 /u01/app/crs*
# chown grid:asmadmin /dev/dm*
# chmod 660 /dev/dm*
echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid

2. Edit the grid user's default environment
#vi /home/grid/.bash_profile
umask 0022
export PS1="`/bin/hostname`-> "
#export ORACLE_HOSTNAME=10.20.28.37
export JAVA_HOME=/usr/local/java
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/crs_base
export ORACLE_HOME=/u01/app/crs_home
export PATH=$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH:/usr/local/bin/:.
#/usr/local/bin/bash
if [ -t 0 ]; then
   stty intr ^C
fi

3. Edit the oracle user's default environment
#vi /home/oracle/.bash_profile
export PS1="`/bin/hostname`-> "
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
#export ORA_GRID_HOME=$ORACLE_BASE/product/11.2.0/crs_1
export ORACLE_OWNER=oracle
export ORACLE_SID=tiqs21
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_GRID_HOME/bin:/sbin:/usr/sbin:/bin:/usr/local/bin:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export NLS_LANG=american_america.ZHS16GBK
export ORACLE_PATH=/home/oracle
#/usr/local/bin/bash
if [ -t 0 ]; then
stty intr ^C
fi
umask 022

#For the Bourne, Bash, or Korn shell, add lines similar to the following to the /etc/profile file:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
        umask 022
fi


2.6.       Configure the naming scheme
Edit /etc/hosts to map the IP addresses to host names:
11.1.1.2   node2
1.1.1.2    node2-priv
11.1.1.12   node2-vip
11.1.1.21   r-cluster-scan
11.1.1.1   node1
1.1.1.1    node1-priv
11.1.1.11   node1-vip
127.0.0.1       localhost       loopback
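It is easy to script a check that every name the cluster expects is present in the file. A minimal sketch (the `check_cluster_hosts` helper name is mine; the name list mirrors the entries above, and the file path is an argument so the check can be rehearsed against a copy):

```shell
#!/bin/sh
# check_cluster_hosts: report any required cluster name missing from a hosts file.
check_cluster_hosts() {
  file="$1"
  missing=0
  for h in node1 node1-priv node1-vip node2 node2-priv node2-vip r-cluster-scan; do
    grep -qw "$h" "$file" || { echo "missing: $h"; missing=1; }
  done
  if [ "$missing" -eq 0 ]; then echo "hosts file OK"; fi
}
check_cluster_hosts "${1:-/etc/hosts}"
```

Run it on every node; any "missing" line means the hosts file still needs an entry.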
3. Storage setup
3.1 HP-UX storage setup
EVA configuration page:
You must have space available on Automatic Storage Management for Oracle Clusterware files (voting disks and Oracle Cluster Registries), and for Oracle Database files, if you install standalone or Oracle Real Application Clusters Databases. Creating Oracle Clusterware files on block or raw devices is no longer supported for new installations.
Note:
When using Oracle Automatic Storage Management (Oracle ASM) for either the Oracle Clusterware files or Oracle Database files, Oracle creates one Oracle ASM instance on each node in the cluster, regardless of the number of databases.

Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an ASM diskgroup, or on a cluster file system or shared network file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.
If you create a diskgroup during installation, then it must be at least 2 GB.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
 All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
 Do not specify multiple partitions on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
 Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. They are not supported with Oracle RAC.

#ioscan -funN -C disk
# /usr/sbin/insf -e
rx2600#[/dev/rdisk]ioscan -m dsf
Persistent DSF           Legacy DSF(s)
========================================
/dev/pt/pt2              /dev/rscsi/c9t0d0
                         /dev/rscsi/c11t0d0
/dev/rdisk/disk2         /dev/rdsk/c0t0d0
/dev/rdisk/disk3         /dev/rdsk/c2t1d0
/dev/rdisk/disk3_p1      /dev/rdsk/c2t1d0s1
/dev/rdisk/disk3_p2      /dev/rdsk/c2t1d0s2
/dev/rdisk/disk3_p3      /dev/rdsk/c2t1d0s3
/dev/rdisk/disk8         /dev/rdsk/c10t0d1
                         /dev/rdsk/c12t0d1
/dev/rdisk/disk9         /dev/rdsk/c10t0d2
                         /dev/rdsk/c12t0d2
/dev/rdisk/disk18        /dev/rdsk/c10t0d3
                         /dev/rdsk/c12t0d3
/dev/rdisk/disk19        /dev/rdsk/c10t0d4
                         /dev/rdsk/c12t0d4
/dev/rdisk/disk20        /dev/rdsk/c10t0d5
                         /dev/rdsk/c12t0d5
/dev/rdisk/disk21        /dev/rdsk/c10t0d6
                         /dev/rdsk/c12t0d6
rx2600#[/dev/rdisk]


rx3600#[/dev/rdisk]ioscan -m dsf
Persistent DSF           Legacy DSF(s)
========================================
/dev/rdisk/disk1         /dev/rdsk/c0t0d0
/dev/rdisk/disk1_p1      /dev/rdsk/c0t0d0s1
/dev/rdisk/disk1_p2      /dev/rdsk/c0t0d0s2
/dev/rdisk/disk1_p3      /dev/rdsk/c0t0d0s3
/dev/pt/pt2              /dev/rscsi/c2t0d0
                         /dev/rscsi/c4t0d0
/dev/rdisk/disk3         /dev/rdsk/c1t0d0
/dev/rdisk/disk7         /dev/rdsk/c3t0d1
                         /dev/rdsk/c5t0d1
/dev/rdisk/disk10        /dev/rdsk/c3t0d2
                         /dev/rdsk/c5t0d2
/dev/rdisk/disk19        /dev/rdsk/c3t0d3
                         /dev/rdsk/c5t0d3
/dev/rdisk/disk20        /dev/rdsk/c3t0d4
                         /dev/rdsk/c5t0d4
/dev/rdisk/disk21        /dev/rdsk/c3t0d5
                         /dev/rdsk/c5t0d5
/dev/rdisk/disk22        /dev/rdsk/c3t0d6
                         /dev/rdsk/c5t0d6



Ensure that both servers see the same device names under /dev/rdisk:
Because the device names seen on the two nodes differ, use the following commands to create matching device nodes manually on both nodes:
#[/dev/rdisk]mknod ora_rac_1 c 13 0x000007
#[/dev/rdisk]mknod ora_rac_2 c 13 0x000008
#[/dev/rdisk]mknod ora_rac_3 c 13 0x000009
#[/dev/rdisk]mknod ora_rac_4 c 13 0x00000a
#[/dev/rdisk]mknod ora_rac_5 c 13 0x00000b
#[/dev/rdisk]mknod ora_rac_6 c 13 0x00000c

3.2 AIX storage setup
1. Configure HACMP
1) Create the cluster
2) Create the resource group
3) Add the shared disks to the resource group
4) Configure the serial network as the heartbeat network
5) Configure the IP network
6) Synchronize the cluster configuration
7) Test the HACMP cluster
2. Start the cluster
On the first node:
# hostname
db01
# smitty clstart                 -- start HACMP
# lssrc -g cluster               -- check HACMP status

On the second node:
# hostname
db02
# smitty clstart
# lssrc -g cluster
3. Verify that the cluster started correctly
# lsvg -o
Datavg
4. Make sure the OCR, voting-file, and data-file devices are owned by grid:asmadmin
chown grid:asmadmin /dev/your_character_device

3.3 RHEL storage setup
1. Configure the multipath software
On the storage array, present the disks to every host that will run RAC, then configure multipathing.
In /etc/multipath.conf, comment out the following three lines:
blacklist {
                devnode "*"
}
#chkconfig multipathd on
#service multipathd restart

[root@scdb10 etc]# multipath -l | grep dm- | sort
mpath0 (36001438005ded1d60000700005100000) dm-2 HP,HSV450
mpath1 (36001438005ded1d60000700005140000) dm-3 HP,HSV450

2. Install the ASMLib packages
ASM package download website:
oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5-debuginfo-2.0.5-1.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5debug-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm

# install the RPMs in the following order
rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm
rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
3. Configure ASM
On both nodes:
[root@scdb9 dev]# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
4. Check the status
[root@scdb9 soft]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

5. Identify the disks to label as ASM disks
On both machines:
# sfdisk -s
/dev/dm-2: 524288000
/dev/dm-3: 524288000
6. Create the ASM disks on one node:
[root@scdb9 ~]# service oracleasm createdisk DG01D000 /dev/dm-2
Marking disk "DG01D000" as an ASM disk: [ OK ]
[root@scdb9 ~]# service oracleasm createdisk CRS01D001 /dev/dm-12
Marking disk "CRS01D001" as an ASM disk: [ OK ]

7. On both nodes, scan for the ASM disks:
 /etc/init.d/oracleasm scandisks
/usr/sbin/oracleasm listdisks

Set the device ownership:
chown grid:asmadmin /dev/dm*
chown grid:asmadmin /dev/mapper/mpath*

8. Make the device ownership persistent across reboots
cd /etc/udev/rules.d
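The original breaks off after changing into the rules directory; a rules file along the following lines is one way to finish the step (the filename, the dm-* match, and the grid:asmadmin ownership are assumptions consistent with the chown commands above, not from the original — adjust them to the actual devices):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (hypothetical example)
# Reapply grid:asmadmin ownership to the multipath block devices at every boot,
# matching the manual chown/chmod commands in step 7.
KERNEL=="dm-*", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

After adding the file, reloading udev or rebooting reapplies the permissions.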
4. Pre-installation checks
4.1 HP-UX pre-installation checks
1. Additional checks
#bdf /home/grid
Ensure you have at least 4.5 GB of space for the grid infrastructure for a cluster home (Grid home) This includes Oracle Clusterware and Automatic Storage Management (Oracle ASM) files and log files.

#bdf /tmp    (more than 1 GB of temporary space is required)
Ensure that you have at least 1 GB of space in /tmp

#add default gateway
route add default 1.1.1.1 1
#vi /etc/rc.config.d/netconf
ROUTE_GATEWAY[0]=15.70.146.254
ROUTE_DESTINATION[0]=default
ROUTE_COUNT[0]=1

2. After the above steps are complete, run the following from the Grid installation directory:
#./runcluvfy.sh stage -pre crsinst -n rx2600,rx3600 -fixup -verbose
Log in as root and run the generated fixup script:
# sh /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
4.2 AIX pre-installation checks
1. Verify memory and swap
/usr/sbin/lsattr -E -l sys0 -a realmem
 /usr/sbin/lsps -a
 swap -l
2. Temporary space
/tmp needs at least 1 GB; check with:
df -g /tmp
If /tmp has less than 1 GB free, point the temporary-file environment variables at another mount point:
TEMP=/mount_point/tmp
TMPDIR=/mount_point/tmp
export TEMP TMPDIR
3. Partition requirements for the database software
The database software is normally installed under /u01; creating a dedicated partition with the mount point /u01 is recommended.
Grid needs about 12 GB and the database about 9 GB, so /u01 >= 40 GB is suggested.
4. Check the installed OS filesets
lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
 bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
The fix packages flagged during verification do not cause problems even if left uninstalled:
# instfix -i -k "IZ41855 IZ51456 IZ52319"
 There was no data for IZ41855 in the fix database.
 There was no data for IZ51456 in the fix database.
 There was no data for IZ52319 in the fix database.

4.3          RHEL pre-installation checks
1. Check the Linux version:
# cat /proc/version

2. Check whether the required packages are installed:
# rpm -q package_name
rpm -q gcc
4.4 Common pre-installation checks
Note: other useful check options:
# as the oracle user, on the node holding the installation media, check the network connectivity configuration
/app/clusterware/cluvfy/runcluvfy.sh comp nodecon -n node1,node2 -verbose

# as the oracle user, on the node holding the installation media, check that the hardware and operating system are suitable:
/app/clusterware/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2 -verbose

# as the oracle user, on the node holding the installation media, check the usable shared storage:
/app/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dw/dsk/c1t2d3,/dw/dsk/c2t4d5

# as the oracle user, check the prerequisites for installing clusterware:
/app/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose

# as the oracle user, check the prerequisites for installing the Oracle software:
$/app/clusterware/cluvfy/runcluvfy.sh stage -pre dbinst -n node1,node2 -verbose

# as the oracle user, check whether the current setup supports creating a RAC database:
$/app/clusterware/cluvfy/runcluvfy.sh stage -pre dbcfg -n node1,node2 -d /oracle/product/Oracle -verbose
5. Install Oracle Grid Infrastructure
If installing on IBM AIX, run the pre-installation script first:
Log in as root, then run rootpre.sh on all nodes.
#su - grid
Bourne or Korn shell:
$ DISPLAY=local_host:0.0 ; export DISPLAY
C shell:
% setenv DISPLAY local_host:0.0
Installation options:
Installation type:
Language selection:
SCAN configuration:
Cluster node information:
Network interface selection:

Storage option:
Create the ASM disk group:
Operating system management groups:
Installation location:
Inventory (OraInventory) location:
Prerequisite checks:
Installation summary:
Run the root scripts when prompted:

To create an additional disk group:
Log in as the grid user:
$asmca
Note: configure the ASM and database compatibility attributes as needed.
ALTER DISKGROUP DATADG1 MODIFY TEMPLATE DATAFILE ATTRIBUTES(FINE);
ALTER DISKGROUP DATADG1 MODIFY TEMPLATE TEMPFILE ATTRIBUTES(FINE);

6. Install Oracle Database
#su - oracle
./runInstaller
Configure security updates:
Installation option:
Grid options:
Installation type:
Language selection:
Database edition:
Installation location:
Operating system management groups:
Prerequisite checks:
Installation summary:
Run the root scripts when prompted:
# create the database
#dbca &

7. Uninstall the Oracle software
1. First run the deconfig script:
/u01/app/crs_home/crs/install/rootcrs.pl -deconfig -force
2. Wipe the disk header information on the storage:
dd if=/dev/zero of=/dev/dm-9 bs=8192 count=16384
dd if=/dev/zero of=/dev/dm-10 bs=8192 count=16384
dd if=/dev/zero of=/dev/dm-11 bs=8192 count=16384
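After the dd commands it is worth confirming that the headers really read back as zeros before the disks are reused. A minimal sketch (the `is_zeroed` helper name is mine; pass it the device path to check):

```shell
#!/bin/sh
# is_zeroed: succeed when the first 8192 bytes of the given device or file
# are all zero, i.e. the header wiped by the dd commands above is really gone.
is_zeroed() {
  nonzero=$(dd if="$1" bs=8192 count=1 2>/dev/null | tr -d '\0' | wc -c)
  [ "$nonzero" -eq 0 ]
}
# example: is_zeroed /dev/dm-9 && echo "dm-9 header zeroed"
```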
3. Remove the related directories and files:
rm -rf /var/opt/oracle
rm -rf /u01/app/*
rm -rf /tmp/.oracle
rm -rf /tmp/OraInstall*
rm -rf /etc/oratab
rm -rf /opt/oracle
4. Recreate the directories:
mkdir -p /u01/app/crs_base
mkdir -p /u01/app/crs_home
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R root:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/app/crs*
chmod -R 775 /u01
chmod -R 755 /u01/app/crs*

5. On RHEL, recreate the ASM disks:
service oracleasm createdisk CRS01D001 /dev/dm-12
chown grid:asmadmin /dev/dm*

8.   Basic RAC administration
1. Check the resource status:
rx2600-> su - grid -c "crs_stat -t -v"

Name           Type           R/RA   F/FT   Target    State     Host       
----------------------------------------------------------------------
ora.CRS1.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora.DATA1.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rx2600     
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rx2600     
ora.eons       ora.eons.type 0/3    0/     ONLINE    ONLINE    rx2600     
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rx2600     
ora.oc4j       ora.oc4j.type 0/5    0/0    OFFLINE   OFFLINE              
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rx2600     
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rx2600     
ora....00.lsnr application    0/5    0/0    ONLINE    ONLINE    rx2600     
ora.rx2600.gsd application    0/5    0/0    OFFLINE   OFFLINE              
ora.rx2600.ons application    0/3    0/0    ONLINE    ONLINE    rx2600     
ora.rx2600.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    rx2600     
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rx3600     
ora....00.lsnr application    0/5    0/0    ONLINE    ONLINE    rx3600     
ora.rx3600.gsd application    0/5    0/0    OFFLINE   OFFLINE              
ora.rx3600.ons application    0/3    0/0    ONLINE    ONLINE    rx3600     
ora.rx3600.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    rx3600     
ora.scan1.vip ora....ip.type 0/0    0/0    ONLINE    ONLINE    rx2600


2. Verify that the clustered database is open

rx2600-> su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"


--------------------------------------------------------------------------------
NAME           TARGET STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS1.dg
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.DATA1.dg
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.LISTENER.lsnr
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.asm
               ONLINE ONLINE       rx2600                   Started            
               ONLINE ONLINE       rx3600                   Started            
ora.eons
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.gsd
               OFFLINE OFFLINE      rx2600                                      
               OFFLINE OFFLINE      rx3600                                      
ora.net1.network
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
ora.ons
               ONLINE ONLINE       rx2600                                      
               ONLINE ONLINE       rx3600                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE ONLINE       rx2600                                      
ora.dbrac.db
      1        ONLINE ONLINE       rx3600                   Open               
      2        ONLINE ONLINE       rx2600                   Open               
ora.oc4j
      1        OFFLINE OFFLINE                                                  
ora.rx2600.vip
      1        ONLINE ONLINE       rx2600                                      
ora.rx3600.vip
      1        ONLINE ONLINE       rx3600                                      
ora.scan1.vip
      1        ONLINE ONLINE       rx2600                                      

3. Check the cluster status:
rx2600-> crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
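The three lines above can be checked mechanically: all three daemons must report "is online". The sketch below embeds the sample output so it runs standalone; in a real check you would capture `crsctl check cluster` directly (the `check_output` variable name is illustrative, not part of the Oracle tooling):

```shell
# Sample output from `crsctl check cluster` (embedded so the sketch runs standalone);
# in practice: check_output=$(crsctl check cluster)
check_output="CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online"

# Count daemons reporting "is online"; all three must be up
online=$(printf '%s\n' "$check_output" | grep -c 'is online')
if [ "$online" -eq 3 ]; then
  echo "cluster stack healthy"
else
  echo "cluster stack degraded ($online/3 online)"
fi
```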
4. Verify the database status
rx2600-> srvctl status database -d dbrac
Instance dbrac1 is running on node rx2600
Instance dbrac2 is running on node rx3600

rx2600-> srvctl status instance -d dbrac -i dbrac1
Instance dbrac1 is running on node rx2600
rx2600-> srvctl status instance -d dbrac -i dbrac2
Instance dbrac2 is running on node rx3600
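For a scripted health check, the `srvctl status database` output can be scanned for any "is not running" line. This is a minimal sketch with the sample output embedded (replace the variable with a live `srvctl status database -d dbrac` capture in practice):

```shell
# Sample `srvctl status database -d dbrac` output (embedded so the sketch runs standalone)
status_output="Instance dbrac1 is running on node rx2600
Instance dbrac2 is running on node rx3600"

# Any line containing "is not running" indicates a down instance
down=$(printf '%s\n' "$status_output" | grep -c 'is not running' || true)
if [ "$down" -eq 0 ]; then
  echo "all instances running"
else
  echo "$down instance(s) down"
fi
```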
5. Verify the node application status
rx2600-> srvctl status nodeapps
VIP rx2600-vip is enabled
VIP rx2600-vip is running on node: rx2600
VIP rx3600-vip is enabled
VIP rx3600-vip is running on node: rx3600
Network is enabled
Network is running on node: rx2600
Network is running on node: rx3600
GSD is enabled
GSD is not running on node: rx2600
GSD is not running on node: rx3600
ONS is enabled
ONS daemon is running on node: rx2600
ONS daemon is running on node: rx3600
eONS is enabled
eONS daemon is running on node: rx2600
eONS daemon is running on node: rx3600

6. Node applications (configuration)
rx2600-> srvctl config nodeapps
VIP exists.:rx2600
VIP exists.: /rx2600-vip/15.70.146.29/255.0.0.0/lan0
VIP exists.:rx3600
VIP exists.: /rx3600-vip/15.70.146.39/255.0.0.0/lan0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 15801, multicast IP address 234.7.2.206, listening port 2016
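The VIP entries above use the format `/name/ip/netmask/interface`. If you need the individual fields in a script (for example, to cross-check against `/etc/hosts`), they can be split on `/`; this is a sketch using one of the sample lines, not an Oracle-supplied parser:

```shell
# Parse one VIP entry from `srvctl config nodeapps`, format: /name/ip/netmask/interface
vip_line="/rx2600-vip/15.70.146.29/255.0.0.0/lan0"

# The leading "/" yields an empty first field, captured in _unused
IFS='/' read -r _unused vip_name vip_ip vip_mask vip_if <<EOF
$vip_line
EOF

echo "name=$vip_name ip=$vip_ip mask=$vip_mask if=$vip_if"
```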


7. Database (configuration)
rx2600-> srvctl config database -d dbrac -a
Database unique name: dbrac
Database name: dbrac
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA1/dbrac/spfiledbrac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: dbrac
Database instances: dbrac1,dbrac2
Disk Groups: DATA1
Services:
Database is enabled
Database is administrator managed

8. ASM (status and configuration)
rx2600-> srvctl status asm
ASM is running on rx2600,rx3600

rx2600-> srvctl config asm -a
ASM home: /u01/app/crs_home
ASM listener: LISTENER
ASM is enabled.


9. TNS listener (status and configuration)
rx2600-> srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rx2600,rx3600

rx2600-> srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
 /u01/app/crs_home on node(s) rx3600,rx2600
End points: TCP:1521
rx2600->

10. SCAN (status and configuration)
rx2600-> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rx2600

rx2600-> srvctl config scan
SCAN name: rx-cluster-scan, Network: 1/15.0.0.0/255.0.0.0/lan0
SCAN VIP name: scan1, IP: /rx-cluster-scan/15.70.146.11
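To confirm the SCAN address in a script, the IP can be pulled out of the `srvctl config scan` line and then compared against DNS. A minimal sketch, with the sample line embedded (in practice, capture the live command output instead):

```shell
# Extract the SCAN VIP address from the `srvctl config scan` line shown above
scan_cfg="SCAN VIP name: scan1, IP: /rx-cluster-scan/15.70.146.11"

# Keep everything after the last "/" (the dotted-quad address)
scan_ip=$(printf '%s\n' "$scan_cfg" | sed -n 's#.*/\([0-9.]*\)$#\1#p')
echo "SCAN IP: $scan_ip"

# To confirm that DNS agrees, one could then run: nslookup rx-cluster-scan
```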

This article is from the "alexatrebooting" blog; please retain this attribution: http://alexatrebooting.blog.51cto.com/3219360/862232

