Oracle 9i release 2 (9.2.0.5) RAC on Fedora Core 1
Contents
- 1 What you need
- 2 On both nodes check packages
- 3 On both nodes - gcc
- 4 On Node1 create empty files
- 5 On both nodes bind rawdevices
- 6 On Node1 create nbd service
- 7 On node2 create nbd service
- 8 On both nodes create test scripts
- 9 Oracle preinstallation
- 10 Oracle Installation
- 10.1 On both nodes login as root in GUI and apply patch p3006854_9204_LINUX.zip
- 10.2 On node1 install Oracle Cluster Manager software
- 10.3 On both nodes, Install Version 10.1.0.2 of the Oracle Universal Installer
- 10.4 On both nodes, run the 10.1.0.2 Oracle Universal Installer to patch the Oracle Cluster Manager (ORACM) to 9.2.0.5
- 10.5 On node1 modify the ORACM configuration files to utilize the hangcheck-timer
- 10.6 On both nodes modify Oracle Cluster Manager Startup Script
- 10.7 On both nodes start the ORACM (Oracle Cluster Manager)
- 10.8 On both nodes Install 9.2.0.4 RAC Database
- 10.9 On both nodes patch the RAC Installation to 9.2.0.5
- 10.10 On both nodes create srvConfig.loc
- 10.11 On both nodes start the GSD (Global Service Daemon)
- 10.12 On both nodes Create Listener Using command netca
- 10.13 On both nodes, make sure all raw devices are working
- 10.14 Only from node1 create a RAC Database using command dbca (Oracle Database Configuration Assistant)
- 10.15 Administering Real Application Clusters Instances
- 11 References
- 12 Author Details
What you need
- Oracle9i Release 2 (9.2.0.4) [total 3 CDs] from http://www.oracle.com/technology/software/products/oracle9i/index.html
- Patch p3006854_9204_LINUX.zip from Metalink or from http://www.idevelopment.info/data/Oracle/DBA_tips/Linux/FedoraCore2_RPMS/p3006854_9204_LINUX.zip
- The 9.2.0.5 Patch Set from Metalink
- Fedora Core 1 Three CDs
- nbd-server and nbd-client (download from http://www.badongo.com/file.php?file=9iRAC__2006-01-05_9iRACSupportFiles.tar.gz)
- hangcheck-timer.o (download from http://www.badongo.com/file.php?file=9iRAC__2006-01-05_9iRACSupportFiles.tar.gz). Note: the attached hangcheck-timer.o will NOT work with SMP kernels.
- 2 machines for Linux, each with around a 60GB hard disk and 512MB RAM (faster is better)
- Optionally, 1 Windows machine with putty helps a lot (http://www.google.co.in/search?hl=en&q=download+putty&meta=). You can remotely login to the Linux boxes and copy-paste commands on the putty console.
On both nodes check packages
Install Fedora Core 1 selecting "all packages", then verify the required packages:
rpm -q libpng gnome-libs compat-libstdc++ compat-libstdc++-devel
rpm -q compat-db compat-gcc compat-gcc-c++ openmotif21 pdksh sysstat
If any of the above packages are not found, install them:
rpm -Uvh gnome-libs-1.4.1.2.90-40.i386.rpm compat-libstdc++-7.3-2.96.126.i386.rpm
rpm -Uvh compat-libstdc++-devel-7.3-2.96.126.i386.rpm compat-db-4.1.25-2.1.i386.rpm
rpm -Uvh compat-gcc-7.3-2.96.126.i386.rpm compat-gcc-c++-7.3-2.96.126.i386.rpm
rpm -Uvh openmotif21-2.1.30-9.i386.rpm pdksh-5.2.14-24.i386.rpm sysstat-5.0.1-2.i386.rpm
rpm -Uvh libpng10-1.0.13-13.i386.rpm tcl-devel-8.4.5-7.i386.rpm tcl-8.4.5-7.i386.rpm
Remove iptables and the Kerberos rlogin packages:
rpm -e iptables redhat-config-securitylevel-tui iptables-devel iptables-ipv6
rpm -e redhat-config-securitylevel firstboot krb5-workstation
Download 9iRACSupportFiles.tar.gz into /software from http://www.badongo.com/file.php?file=9iRAC__2006-01-05_9iRACSupportFiles.tar.gz
mkdir /software
cd /software
tar -zxvf 9iRACSupportFiles.tar.gz
cd 9iRACSupportFiles
cp nbd* /bin
cp hangcheck-timer.o /
On both nodes - gcc
mv /usr/bin/gcc /usr/bin/gcc323
mv /usr/bin/g++ /usr/bin/g++323
ln -s /usr/bin/gcc296 /usr/bin/gcc
ln -s /usr/bin/g++296 /usr/bin/g++
On Node1 create empty files
mkdir /nbd
cd /nbd
dd if=/dev/zero of=/nbd/system_raw bs=1M count=2048
dd if=/dev/zero of=/nbd/users_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/temp_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/undo_1_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/undo_2_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/indx_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/tools_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/controlfile_1_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/controlfile_2_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/redo1_1_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/redo1_2_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/redo2_1_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/redo2_2_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/spfile_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/srvctl_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/nm_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/drsys_1_raw bs=1M count=1024
dd if=/dev/zero of=/nbd/CMQuorumFile bs=1M count=2048
dd if=/dev/zero of=/nbd/srvm bs=1M count=2048
chmod 777 *
chmod 777 /dev/nb*
chmod 777 /dev/raw*
chmod 777 /dev/raw/raw*
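The repetitive dd invocations above follow one pattern, so they can be generated rather than typed. A minimal sketch, assuming the same file names and sizes (pipe the output to sh to execute):

```shell
# Emit one dd command per shared file: the 1GB files first, then the
# three 2GB files (system_raw, CMQuorumFile, srvm).
gen_dd() {
    for f in users temp undo_1 undo_2 indx tools controlfile_1 \
             controlfile_2 redo1_1 redo1_2 redo2_1 redo2_2 spfile \
             srvctl nm drsys_1; do
        echo "dd if=/dev/zero of=/nbd/${f}_raw bs=1M count=1024"
    done
    for f in system_raw CMQuorumFile srvm; do
        echo "dd if=/dev/zero of=/nbd/$f bs=1M count=2048"
    done
}
gen_dd
```

On node1 you would run `gen_dd | sh` from /nbd; the chmod commands above still apply.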
On both nodes bind rawdevices
Add the following lines to /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/nb1
/dev/raw/raw2 /dev/nb2
/dev/raw/raw3 /dev/nb3
/dev/raw/raw4 /dev/nb4
/dev/raw/raw5 /dev/nb5
/dev/raw/raw6 /dev/nb6
/dev/raw/raw7 /dev/nb7
/dev/raw/raw8 /dev/nb8
/dev/raw/raw9 /dev/nb9
/dev/raw/raw10 /dev/nb10
/dev/raw/raw11 /dev/nb11
/dev/raw/raw12 /dev/nb12
/dev/raw/raw13 /dev/nb13
/dev/raw/raw14 /dev/nb14
/dev/raw/raw15 /dev/nb15
/dev/raw/raw16 /dev/nb16
/dev/raw/raw17 /dev/nb17
/dev/raw/raw18 /dev/nb18
/dev/raw/raw19 /dev/nb19
Then enable the rawdevices service at boot and start it
chkconfig rawdevices on
service rawdevices start
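The 19 bindings are purely sequential, so they can also be generated instead of typed. A sketch (append the output to /etc/sysconfig/rawdevices):

```shell
# Print one "/dev/raw/rawN /dev/nbN" pair per line for N = 1..19.
gen_bindings() {
    n=1
    while [ $n -le 19 ]; do
        echo "/dev/raw/raw$n /dev/nb$n"
        n=$((n + 1))
    done
}
gen_bindings
```

For example: `gen_bindings >> /etc/sysconfig/rawdevices`.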
On Node1 create nbd service
Create the file /etc/init.d/nbd with the following content
#!/bin/bash
#
# chkconfig: 2345 98 15
# description: nbd server and client

# Source function library
. /etc/init.d/functions

export PATH=/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/usr/games:/opt/gnome/bin:/opt/kde3/bin

start () {
    echo -n 'Configuring kernel modules nbd, softdog, hangcheck-timer: '
    daemon modprobe nbd
    modprobe softdog soft_margin=60 soft_noboot=1
    rmmod hangcheck-timer
    insmod /hangcheck-timer.o hangcheck_tick=30 hangcheck_margin=180
    echo

    echo -n 'Setting Up Kernel Parameters: '
    daemon sysctl -p
    echo

    sleep 1
    # Check if nbd-server is already running
    if [ `ps ax | grep nbd-server | wc -l` -gt 1 ]
    then
        echo -n 'nbd server is already running: '
        echo_failure
        echo
    else
        echo -n 'Starting nbd server: '
        daemon nbd-server 4101 /nbd/system_raw
        nbd-server 4102 /nbd/users_raw
        nbd-server 4103 /nbd/temp_raw
        nbd-server 4104 /nbd/undo_1_raw
        nbd-server 4105 /nbd/undo_2_raw
        nbd-server 4106 /nbd/indx_raw
        nbd-server 4107 /nbd/tools_raw
        nbd-server 4108 /nbd/controlfile_1_raw
        nbd-server 4109 /nbd/controlfile_2_raw
        nbd-server 4110 /nbd/redo1_1_raw
        nbd-server 4111 /nbd/redo1_2_raw
        nbd-server 4112 /nbd/redo2_1_raw
        nbd-server 4113 /nbd/redo2_2_raw
        nbd-server 4114 /nbd/spfile_raw
        nbd-server 4115 /nbd/srvctl_raw
        nbd-server 4116 /nbd/nm_raw
        nbd-server 4117 /nbd/drsys_1_raw
        nbd-server 4118 /nbd/CMQuorumFile
        nbd-server 4119 /nbd/srvm
        echo
    fi

    sleep 5
    # Check if nbd-client is already running
    if [ `ps ax | grep nbd-client | wc -l` -gt 1 ]
    then
        echo -n 'nbd client is already running: '
        echo_failure
        echo
    else
        echo -n 'Starting nbd client: '
        daemon nbd-client node1 4101 /dev/nb1
        nbd-client node1 4102 /dev/nb2
        nbd-client node1 4103 /dev/nb3
        nbd-client node1 4104 /dev/nb4
        nbd-client node1 4105 /dev/nb5
        nbd-client node1 4106 /dev/nb6
        nbd-client node1 4107 /dev/nb7
        nbd-client node1 4108 /dev/nb8
        nbd-client node1 4109 /dev/nb9
        nbd-client node1 4110 /dev/nb10
        nbd-client node1 4111 /dev/nb11
        nbd-client node1 4112 /dev/nb12
        nbd-client node1 4113 /dev/nb13
        nbd-client node1 4114 /dev/nb14
        nbd-client node1 4115 /dev/nb15
        nbd-client node1 4116 /dev/nb16
        nbd-client node1 4117 /dev/nb17
        nbd-client node1 4118 /dev/nb18
        nbd-client node1 4119 /dev/nb19
        echo
    fi
}

stop () {
    echo -n 'Shutting down nbd client: '
    killproc nbd-client
    echo

    echo -n 'Shutting down nbd server: '
    killproc nbd-server
    echo
}

status () {
    if [ `ps ax | grep nbd-server | wc -l` -le 1 ]
    then
        echo 'nbd server is stopped'
    else
        echo -n 'nbd server is running'
        echo_success
        echo
        echo 'nbd-server process count: ' `ps ax | grep nbd-server | wc -l`
    fi

    if [ `ps ax | grep nbd-client | wc -l` -le 1 ]
    then
        echo 'nbd client is stopped'
    else
        echo -n 'nbd client is running'
        echo_success
        echo
        echo 'nbd-client process count: ' `ps ax | grep nbd-client | wc -l`
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|restart|status}"; exit 1 ;;
esac
Add service to startup and start it
chmod 755 /etc/init.d/nbd
chkconfig nbd --add
chkconfig nbd on
service nbd start
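The 19 nbd-server/nbd-client pairs in the script above use consecutive ports 4101-4119 against the files created earlier. A sketch that derives the server invocations from one name list (the client side substitutes `nbd-client node1 PORT /dev/nbN` in the same order):

```shell
# Emit the nbd-server commands; port 4100+N serves the Nth file.
gen_nbd_servers() {
    port=4100
    for f in system_raw users_raw temp_raw undo_1_raw undo_2_raw \
             indx_raw tools_raw controlfile_1_raw controlfile_2_raw \
             redo1_1_raw redo1_2_raw redo2_1_raw redo2_2_raw \
             spfile_raw srvctl_raw nm_raw drsys_1_raw CMQuorumFile srvm; do
        port=$((port + 1))
        echo "nbd-server $port /nbd/$f"
    done
}
gen_nbd_servers
```

Keeping both scripts driven from the same list avoids the easy mistake of serving one file on two ports.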
On node2 create nbd service
Create the file /etc/init.d/nbd with the following content
#!/bin/bash
#
# chkconfig: 2345 98 15
# description: nbd client service used in Oracle Cluster Installation

# Source function library
. /etc/init.d/functions

export PATH=/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/usr/games:/opt/gnome/bin:/opt/kde3/bin

start () {
    echo -n 'Configuring kernel modules nbd, softdog, hangcheck-timer: '
    daemon modprobe nbd
    modprobe softdog soft_margin=60 soft_noboot=1
    rmmod hangcheck-timer
    insmod /hangcheck-timer.o hangcheck_tick=30 hangcheck_margin=180
    echo

    echo -n 'Setting Up Kernel Parameters: '
    daemon sysctl -p
    echo

    sleep 1
    # Check if nbd-client is already running
    if [ `ps ax | grep nbd-client | wc -l` -gt 1 ]
    then
        echo -n 'nbd client is already running: '
        echo_failure
        echo
    else
        echo -n 'Starting nbd client: '
        daemon nbd-client node1 4101 /dev/nb1
        nbd-client node1 4102 /dev/nb2
        nbd-client node1 4103 /dev/nb3
        nbd-client node1 4104 /dev/nb4
        nbd-client node1 4105 /dev/nb5
        nbd-client node1 4106 /dev/nb6
        nbd-client node1 4107 /dev/nb7
        nbd-client node1 4108 /dev/nb8
        nbd-client node1 4109 /dev/nb9
        nbd-client node1 4110 /dev/nb10
        nbd-client node1 4111 /dev/nb11
        nbd-client node1 4112 /dev/nb12
        nbd-client node1 4113 /dev/nb13
        nbd-client node1 4114 /dev/nb14
        nbd-client node1 4115 /dev/nb15
        nbd-client node1 4116 /dev/nb16
        nbd-client node1 4117 /dev/nb17
        nbd-client node1 4118 /dev/nb18
        nbd-client node1 4119 /dev/nb19
        echo
    fi
}

stop () {
    echo -n 'Shutting down nbd client: '
    killproc nbd-client
    echo
}

status () {
    if [ `ps ax | grep nbd-client | wc -l` -le 1 ]
    then
        echo 'nbd client is stopped'
    else
        echo -n 'nbd client is running'
        echo_success
        echo
        echo 'nbd-client process count: ' `ps ax | grep nbd-client | wc -l`
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|restart|status}"; exit 1 ;;
esac
Add service to startup and start it
chmod 755 /etc/init.d/nbd
chkconfig nbd --add
chkconfig nbd on
service nbd start
On both nodes create test scripts
Create /bin/rawtest with the following content
alias ls='ls --color=tty'
cd /orac/
for F in `ls -w 1`; do
    echo "File $F"
    dd if=$F of=/dev/null count=10
done
Create /bin/rlinks with the following content
ln -s /dev/raw/raw1 /orac/system_raw
ln -s /dev/raw/raw2 /orac/users_raw
ln -s /dev/raw/raw3 /orac/temp_raw
ln -s /dev/raw/raw4 /orac/undo_1_raw
ln -s /dev/raw/raw5 /orac/undo_2_raw
ln -s /dev/raw/raw6 /orac/indx_raw
ln -s /dev/raw/raw7 /orac/tools_raw
ln -s /dev/raw/raw8 /orac/controlfile_1_raw
ln -s /dev/raw/raw9 /orac/controlfile_2_raw
ln -s /dev/raw/raw10 /orac/redo1_1_raw
ln -s /dev/raw/raw11 /orac/redo1_2_raw
ln -s /dev/raw/raw12 /orac/redo2_1_raw
ln -s /dev/raw/raw13 /orac/redo2_2_raw
ln -s /dev/raw/raw14 /orac/spfile_raw
ln -s /dev/raw/raw15 /orac/srvctl_raw
ln -s /dev/raw/raw16 /orac/nm_raw
ln -s /dev/raw/raw17 /orac/drsys_1_raw
ln -s /dev/raw/raw18 /orac/CMQuorumFile
ln -s /dev/raw/raw19 /orac/srvm
chown oracle:dba /orac/*
Make the scripts executable
chmod 755 /bin/rawtest
chmod 755 /bin/rlinks
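The link list in /bin/rlinks must stay in step with the raw-device numbering. A sketch that generates it from the same name list used for the nbd files:

```shell
# Print the ln -s commands pairing /dev/raw/rawN with its symbolic name.
gen_rlinks() {
    n=0
    for f in system_raw users_raw temp_raw undo_1_raw undo_2_raw \
             indx_raw tools_raw controlfile_1_raw controlfile_2_raw \
             redo1_1_raw redo1_2_raw redo2_1_raw redo2_2_raw \
             spfile_raw srvctl_raw nm_raw drsys_1_raw CMQuorumFile srvm; do
        n=$((n + 1))
        echo "ln -s /dev/raw/raw$n /orac/$f"
    done
}
gen_rlinks
```

For example, `gen_rlinks > /bin/rlinks` would regenerate the script (the trailing chown line still needs to be appended).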
Oracle preinstallation
Login as root and run the following commands
echo 'kernel.shmmax = 2147483648' >> /etc/sysctl.conf
echo 'kernel.shmmni = 128' >> /etc/sysctl.conf
echo 'kernel.shmall = 2097152' >> /etc/sysctl.conf
echo 'kernel.sem = 250 32000 100 128' >> /etc/sysctl.conf
echo 'fs.file-max = 65536' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_port_range = 1024 65000' >> /etc/sysctl.conf
sysctl -p
echo 'oracle soft nofile 65536' >> /etc/security/limits.conf
echo 'oracle hard nofile 65536' >> /etc/security/limits.conf
echo 'oracle soft nproc 16384' >> /etc/security/limits.conf
echo 'oracle hard nproc 16384' >> /etc/security/limits.conf
groupadd oinstall
groupadd dba
groupadd oper
groupadd apache
useradd -g dba -G oinstall oracle
useradd -g dba -G oinstall apache
usermod oracle -G root
chmod -R 775 /dev
mkdir /orac
chmod 777 /dev/nb*
chmod 777 /dev/raw*
chmod 777 /dev/raw/*
chown oracle:dba /dev/nb*
chown oracle:dba /dev/raw*
chown oracle:dba /dev/raw/*
chown -R oracle:dba /orac
cp /etc/redhat-release /etc/redhat-release.bak
echo redhat-3 > /etc/redhat-release
mkdir -p /u01/app/oracle/product/9.2.0.1.0
chown -R oracle:dba /u01
mkdir /var/opt/oracle
touch /var/opt/oracle/srvConfig.loc
chown -R oracle:dba /var/opt/oracle
chmod -R 775 /u01
passwd oracle
passwd apache
Type the password for the oracle and apache users.
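A quick way to confirm the kernel parameters actually landed in /etc/sysctl.conf is to grep for each required key. A hedged sketch; check_params is a hypothetical helper that takes the file contents as its argument:

```shell
# check_params: succeed (print "ok") only if every required kernel key
# appears in the sysctl.conf-style text passed as $1 (hypothetical helper).
check_params() {
    for p in kernel.shmmax kernel.shmmni kernel.shmall kernel.sem \
             fs.file-max net.ipv4.ip_local_port_range; do
        printf '%s\n' "$1" | grep -q "^$p " || { echo "missing: $p"; return 1; }
    done
    echo ok
}

# On a node, run it against the real file:
#   check_params "$(cat /etc/sysctl.conf)"
```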
Only on node1 create datafiles.conf
Login as root and run the following command
cat > /orac/datafiles.conf <<EOF
system=/orac/system_raw
users=/orac/users_raw
temp=/orac/temp_raw
undotbs1=/orac/undo_1_raw
undotbs2=/orac/undo_2_raw
indx=/orac/indx_raw
tools=/orac/tools_raw
control1=/orac/controlfile_1_raw
control2=/orac/controlfile_2_raw
redo1_1=/orac/redo1_1_raw
redo1_2=/orac/redo1_2_raw
redo2_1=/orac/redo2_1_raw
redo2_2=/orac/redo2_2_raw
spfile=/orac/spfile_raw
srvconfig_loc=/orac/srvctl_raw
EOF
On both nodes setup Oracle environment
Add the following lines to oracle's .bash_profile on both nodes, making sure that ORACLE_SID is unique on each node.
For example, on node1 "ORACLE_SID=test1; export ORACLE_SID" and on node2 "ORACLE_SID=test2; export ORACLE_SID".
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2.0.1.0; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$PATH; export PATH
ORACLE_OWNER=oracle; export ORACLE_OWNER
ORACLE_SID=test1; export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
LD_ASSUME_KERNEL=2.4.1; export LD_ASSUME_KERNEL
THREADS_FLAG=native; export THREADS_FLAG
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
DBCA_RAW_CONFIG=/orac/datafiles.conf; export DBCA_RAW_CONFIG
On both nodes setup RSH
Check whether rsh-server is installed with the command
rpm -q rsh-server
If it is not there, install it from the CD with the command
rpm -ivh rsh-server-0.17-19.i386.rpm
chkconfig rsh on
chkconfig rlogin on
chkconfig xinetd on
service xinetd restart
touch /etc/hosts.equiv
chmod 600 /etc/hosts.equiv
chown root.root /etc/hosts.equiv
echo '+node1 oracle' >> /etc/hosts.equiv
echo '+node2 oracle' >> /etc/hosts.equiv
On both nodes configure /etc/hosts
Give unique static IP addresses to both node1 and node2. Edit /etc/hosts on both nodes and add their IP addresses.
e.g.
127.0.0.1   localhost localhost.localdomain
10.1.1.1    node1
10.1.1.2    node2
Test network settings
- From node1, ping node1 (i.e. ping itself) and make sure it resolves to the correct IP and NOT 127.0.0.1.
- From node1, ping node2 and make sure it resolves to the correct IP address of node2.
- From node2, ping node2 (i.e. ping itself) and make sure it resolves to the correct IP and NOT 127.0.0.1.
- From node2, ping node1 and make sure it resolves to the correct IP address of node1.
- On node1, run the command hostname and make sure it returns node1, not localhost or localhost.localdomain. If you get something else, set HOSTNAME=node1 in /etc/sysconfig/network and reboot the system.
- On node2, run the command hostname and make sure it returns node2, not localhost or localhost.localdomain. If you get something else, set HOSTNAME=node2 in /etc/sysconfig/network and reboot the system.
- On node1, login as root and run su - oracle followed by rsh node2; you should be able to log in to node2 without any password.
- On node2, login as root and run su - oracle followed by rsh node1; you should be able to log in to node1 without any password.
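The "pinging the correct IP, not 127.0.0.1" check comes down to how /etc/hosts resolves each node name. A hedged sketch; check_host is a hypothetical helper taking the hosts-file text and a node name:

```shell
# check_host: succeed only if NAME maps to a non-loopback address in the
# hosts-file text passed as $1 (hypothetical helper).
check_host() {
    hosts="$1"; name="$2"
    ip=$(printf '%s\n' "$hosts" | \
         awk -v h="$name" '{for (i = 2; i <= NF; i++) if ($i == h) print $1}')
    [ -n "$ip" ] && [ "$ip" != "127.0.0.1" ]
}

# On a node, run it against the real file:
#   check_host "$(cat /etc/hosts)" node1 && echo "node1 ok"
```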
Oracle Installation
On both nodes login as root in GUI and apply patch p3006854_9204_LINUX.zip
Go to the directory where the patch was downloaded
unzip p3006854_9204_LINUX.zip
cd 3006854/
sh rhel3_pre_install.sh
On node1 install Oracle Cluster Manager software
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, mount cdrom and start installation
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
mount /mnt/cdrom
cd /mnt/cdrom/Disk1
./runInstaller
- At the 'Welcome Screen', click Next.
- If the 'Inventory Location' screen appears, enter the inventory location, then click OK.
- If the 'Unix Group Name' screen appears, enter the unix group name dba, then click Next.
- At this point you may be prompted to run /tmp/orainstRoot.sh on both nodes. Run it (on both nodes if prompted) and click Continue.
- At the 'File Locations Screen', verify that the destination listed is your ORACLE_HOME directory. Also enter ORACLE_HOME as the name.
- At the 'Available Products Screen', check 'Oracle Cluster Manager'. Click Next.
- At the public node information screen, enter the public node names node1 and node2. Click Next.
- At the private node information screen, enter the interconnect node names node1 and node2. Click Next.
- Enter the full name of the file or raw device /orac/CMQuorumFile. Click Next.
- Press Install at the summary screen. You will briefly get a progress window followed by the end of installation screen. Click Exit and confirm by clicking Yes.
On both nodes, Install Version 10.1.0.2 of the Oracle Universal Installer
From the root login, create a directory /software/9205. Download the 9.2.0.5 patchset from MetaLink (Patches: enter 3501955 in the Patch Number field) and move the patch p3501955_9205_LINUX.zip into /software/9205. Unzip it and change the permissions of /software/9205 to 777 so that the oracle user can use it. Then login as 'root' in the GUI, open a terminal, run 'xhost +', su to oracle, go to /software/9205/Disk1 and start the installation.
To do this, in the root user GUI terminal run the following commands:
xhost +
mkdir -p /software/9205
mv /path/to/p3501955_9205_LINUX.zip /software/9205
cd /software/9205
unzip p3501955_9205_LINUX.zip
cpio -idmv < 9205_lnx32_release.cpio
chmod -R 777 /software
su - oracle
cd /software/9205/Disk1
./runInstaller
At the 'Welcome Screen', click Next.
At the 'File Locations Screen', Change the $ORACLE_HOME name from the dropdown list to ORACLE_HOME. Click Next.
On the 'Available Products Screen', Check 'Oracle Universal Installer 10.1.0.2'. Click Next.
Press Install at the summary screen.
You will now briefly get a progress window followed by the end of installation screen. Click Exit and confirm by clicking Yes.
Remember to install the 10.1.0.2 Installer on ALL cluster nodes.
Note that you may need to change the 9.2 $ORACLE_HOME name to 'ORACLE_HOME', on the 'File Locations Screen' for other nodes.
It will ask if you want to specify a non-empty directory, say 'Yes'.
On both nodes, run the 10.1.0.2 Oracle Universal Installer to patch the Oracle Cluster Manager (ORACM) to 9.2.0.5
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, go to /software/9205/Disk1 and start the installation.
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
cd /software/9205/Disk1
./runInstaller
At the 'Welcome Screen', click Next.
At the 'File Locations Screen', make sure the source location is to the products.xml file in the 9.2.0.5 patchset location under /software/9205/Disk1.
Also verify the destination listed is your ORACLE_HOME directory. Change the $ORACLE_HOME name from the dropdown list to 'ORACLE_HOME'. Click Next.
At the 'Available Products Screen', Check 'Oracle9iR2 Cluster Manager 9.2.0.5.0'. Click Next.
At the public node information screen, enter the public node names node1 and node2. Click Next.
At the private node information screen, enter the interconnect node names node1 and node2. Click Next.
Click Install at the summary screen. You will now briefly get a progress window followed by the end of installation screen.
Click Exit and confirm by clicking Yes.
On node1 modify the ORACM configuration files to utilize the hangcheck-timer
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, cd to the directory $ORACLE_HOME/oracm/admin/ and edit the file cmcfg.ora.
Make sure the contents of cmcfg.ora are as follows
ClusterName=Oracle Cluster Manager, version 9i
MissCount=250
PrivateNodeNames=node1 node2
PublicNodeNames=node1 node2
ServicePort=9998
CmDiskFile=/orac/CMQuorumFile
KernelModuleName=hangcheck-timer
HostName=node1
On node2 create log directory and modify the ORACM configuration files to utilize the hangcheck-timer
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, and create the directory log under $ORACLE_HOME/oracm.
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
mkdir $ORACLE_HOME/oracm/log
Then cd to the directory $ORACLE_HOME/oracm/admin and edit the file cmcfg.ora.
Make sure the contents of cmcfg.ora are as follows
ClusterName=Oracle Cluster Manager, version 9i
MissCount=250
PrivateNodeNames=node1 node2
PublicNodeNames=node1 node2
ServicePort=9998
CmDiskFile=/orac/CMQuorumFile
KernelModuleName=hangcheck-timer
HostName=node2
On both nodes modify Oracle Cluster Manager Startup Script
Login as root, su - oracle, and edit the file $ORACLE_HOME/oracm/bin/ocmstart.sh. Add the following line at the beginning of the file, i.e. on line 1:
/bin/rm /u01/app/oracle/product/9.2.0.1.0/oracm/log/*
On both nodes start the ORACM (Oracle Cluster Manager)
On both nodes verify that the raw devices are accessible with the command rawtest; if you get any error, restart the nbd service on node1 first, then on node2. Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, and run the command rawtest.
Make sure that you don't get any error like 'Input/output error'. Then cd to the directory $ORACLE_HOME/oracm/bin, su to root, and run ./ocmstart.sh.
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
rawtest   # Make sure that you don't get any error like 'Input/output error'
cd $ORACLE_HOME/oracm/bin
su root
./ocmstart.sh
Verify that ORACM is running with the following command
ps -ef | grep oracm
Make sure you see several oracm processes. If not, check the errors in $ORACLE_HOME/oracm/log/cm.log.
Also verify that the ORACM version is the same on each node
# cd $ORACLE_HOME/oracm/log
# head -1 cm.log
oracm, version[ 9.2.0.2.0.49 ] started {Fri May 14 09:22:28 2004 }
On both nodes Install 9.2.0.4 RAC Database
Note: Due to bug 3547724, as root temporarily create a symbolic link /oradata pointing to an oradata directory with space available, prior to running the RAC install:
# mkdir -p /u04/oradata
# chmod 777 /u04/oradata
# ln -s /u04/oradata /oradata
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, mount the cdrom, go to /mnt/cdrom/Disk1 and start the installation.
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
mount /mnt/cdrom
cd /mnt/cdrom/Disk1
./runInstaller
At the 'Welcome Screen', click Next.
At the 'Cluster Node Selection Screen', make sure that all RAC nodes are selected.
At the 'File Locations Screen', verify the destination listed is your ORACLE_HOME directory and that the source directory is pointing to the products.jar from the 9.2.0.4 cd or staging location.
At the 'Available Products Screen', check 'Oracle 9i Database 9.2.0.4'. Click Next.
At the 'Installation Types Screen', check 'Enterprise Edition', click Next.
At the 'Database Configuration Screen', check 'Software Only'. Click Next.
At the 'Shared Configuration File Name Screen', enter the path of the CFS or NFS srvm raw device /orac/srvm. Click Next.
Click Install at the summary screen. Note that some of the items installed will say '9.2.0.1' for the version; this is normal because only some items needed to be patched up to 9.2.0.4.
You will now get a progress window, run root.sh when prompted.
You will then see the end of installation screen. Click Exit and confirm by clicking Yes.
Note: You can now remove the /oradata symbolic link:
# rm /oradata
On both nodes patch the RAC Installation to 9.2.0.5
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, go to /software/9205/Disk1 and start the installation.
To do this, in the root user GUI terminal run the following commands:
xhost +
su - oracle
cd /software/9205/Disk1
./runInstaller
At the 'Welcome Screen', click Next.
View the 'Cluster Node Selection Screen', click Next.
At the 'File Locations Screen', make sure the source location is to the products.xml file in the 9.2.0.5 patchset location under Disk1/stage.
Verify the destination listed is your ORACLE_HOME directory. Change the $ORACLE_HOME name from the dropdown list to the 9.2 $ORACLE_HOME name. Click Next.
At the 'Available Products Screen', Check 'Oracle9iR2 PatchSets 9.2.0.5.0'. Click Next.
Click Install at the summary screen.
You will now get a progress window, run root.sh when prompted.
You will then see the end of installation screen. Click Exit and confirm by clicking Yes.
On both nodes create srvConfig.loc
Login as root and run the following commands
mkdir -p /var/opt/oracle
echo 'srvconfig_loc=/orac/srvm' > /var/opt/oracle/srvConfig.loc
chown -R oracle:dba /var/opt/oracle
chmod -R 755 /var/opt/oracle
On both nodes start the GSD (Global Service Daemon)
Login as root and run the following commands
su - oracle
gsdctl start
You should see 'Successfully started GSD on local node'
Then check the status with command
gsdctl stat
You should see 'GSD is running on the local node'
If the GSD does not stay up, try running 'srvconfig -init -f' from the OS prompt.
On both nodes Create Listener Using command netca
Login as 'root' in GUI, open terminal, and run the following commands
xhost +
su - oracle
netca
Select 'Cluster Configuration'. Click Next.
Click 'Select all nodes'. Click Next.
Click 'Listener configuration'. Click Next.
Click 'Add'. Click Next.
Listener name: LISTENER. Click Next.
Selected Protocols: TCP. Click Next.
Click 'Use the standard port number of 1521'. Click Next.
Click 'No'. Click Next.
Click Next.
Click 'Finish'.
On both nodes, make sure all raw devices are working
On both nodes verify that the raw devices are accessible with the command rawtest; if you get any error, restart the nbd service on node1 first, then on node2.
Login as 'root' in GUI, open terminal, run 'xhost +', su to oracle, and run the command rawtest.
Make sure that you don't get any error like 'Input/output error'.
Only from node1 create a RAC Database using command dbca (Oracle Database Configuration Assistant)
Login as 'root' in GUI, open terminal, and run the following commands
xhost +
su - oracle
dbca
Choose Oracle Cluster Database option and select Next.
The Operations page is displayed. Choose the option Create a Database and click Next.
The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next.
The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next. Note: The Show Details button provides information on the database template selected.
DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID) as test.
The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen.
Select the connection options desired from the Database Connection Options page. Click Next.
DBCA now displays the Initialization Parameters page, which comprises a number of tabs. Modify the Memory settings if desired, then select the File Locations tab to update the initialization parameter filename and location. The option Create persistent initialization parameter file is selected by default; enter the raw device name /orac/spfile_raw for the location of the server parameter file (spfile). The File Location Variables… button displays variable information. The All Initialization Parameters… button displays the Initialization Parameters dialog box, which presents values for all initialization parameters and indicates, through the included (Y/N) check box, whether each is to be included in the spfile to be created. Instance-specific parameters have an instance value in the instance column. Complete the entries in the All Initialization Parameters page and select Close. Note: there are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
DBCA now displays the Database Storage Window. This page allows you to enter file names for each tablespace in your database.
The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
The DBCA Summary window is displayed. Review this information and then click OK. Once you click the OK button and the summary screen is closed, it may take a few moments for the DBCA progress bar to start. DBCA then begins to create the database according to the values specified.
During the database creation process, you may see the following error:
ORA-29807: specified operator does not exist
This is a known issue (bug 2925665). You can click on the 'Ignore' button to continue. Once DBCA has completed database creation, remember to run the 'prvtxml.plb' script from $ORACLE_HOME/rdbms/admin independently, as the user SYS. It is also advised to run the 'utlrp.sql' script to ensure that there are no invalid objects in the database at this time.
A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to work with an Oracle RAC database.
Login as root, run su - oracle, and run the following command to check whether both Oracle instances are running
$ srvctl status database -d test
Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration information that is used by several Oracle tools. For example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running after you configure your database. To use SRVCTL, you must have already created the configuration information for the database that you want to administer. You must have done this either by using the Oracle Database Configuration Assistant (DBCA), or by using the srvctl add command as described below.
For example, to display the configuration details for database test on nodes node1 and node2 (instances test1 and test2): login as root, su to oracle with the command su - oracle, and run:
$ srvctl config test
$ srvctl config -p test -n node1
node1 test1 /u01/app/oracle/product/9.2.0.1.0
$ srvctl status database -d test
Instance test1 is running on node node1
Instance test2 is running on node node2
Examples of starting and stopping RAC follow:
$ srvctl start database -d test
$ srvctl stop database -d test
$ srvctl stop instance -d test -i test2
$ srvctl start instance -d test -i test2
For further information on srvctl and gsdctl see the Oracle9i Real Application Clusters Administration manual.
References
- Metalink Document ID: 184821.1
- http://www.puschitz.com/InstallingOracle9iRAC.shtml
Author Details
Author: Swapnil Durgade
Email: swapnil_durgade@yahoo.com
Date: 05-Jan-2006