

Oracle 10g RAC installation for SLES9 system with installation scripts.

Shared disk media – ASM on iSCSI on a NetApp network server
Document version: 1.0. Last modified: 8/5/2005

Alexei Roudnev, senior system and network engineer, Exigengroup USA. (http://www.mydomain.com) Contact: Alexei_Roudnev @ mydomain.com

alex @ Relcom . net

Warning. This is not an official document. It is not supported, verified, or approved by any vendor (SuSE/Novell, Oracle, Network Appliance). Use it at your own risk.
Scripts can be taken from: ../SLES9-ORA10RAC-iSCSI-ASM.tgz

TABLE OF CONTENTS:
Introduction
System and network installations
  1. Prerequisites and hardware
  2. System installation
  3. Filer configuration
  4. Images and X11
    4.3) Copy installation scripts on NFS server
    4.4) X11 access
  5. iSCSI and raw devices configuration preparing for RAC and ASM installation
    5.1) List of raw devices
    5.2) Patching iscsi script
    5.3) Running iscsi and creating volumes on iSCSI
    5.4) Creating LVM volumes
    5.5) Creating RAW devices
    5.6) Installing rawnames script, turning on iSCSI, and testing how it works thru system reboot
    5.7) Now reboot to verify that everything will come up correctly
  6. Prepare CRS installation
    6.1) Login to both nodes
    6.2) Verify CONFIG.sh once again
    6.3) 010-install-orarun.sh - installs orarun, unlocks oracle user and sets up password
    6.4) 015-edit-oraprofile.sh - copy our version of oracle profile and edit it (set up SID)
    6.5) 018-edit-rcoracle.sh - copy our version of oracle startup files for Oracle10 cluster
    6.6) 020-start-rcoracle.sh - start rcoracle
    6.7) 030-check-oracle-user.sh - this script verifies the Oracle user and the location of the Oracle installation disk
    6.8) 060-setup-xntp.sh - set up XNTP service (important to synchronize clocks on all nodes)
    6.9) 070-edit-services.sh - script removes conflicting records from /etc/services


    6.10) 080-ssh-genkeys.sh - set up password-less access between nodes
    6.11) 090-verify-ssh-access.sh - verify results (password-less access between nodes)
    6.12) Edit and verify /etc/hosts
  7. Install Cluster Ready Services
    7.1) 100-verify-rawfiles.sh – verify that oracle has access to shared files
    7.2) 200-InstCRS.1sh - installing Cluster Ready Services. This is the first real INSTALLATION step
    7.3) Running root.sh scripts (by 220-runRootsh.sh) on all nodes
    7.4) Running 230-check-crs.sh - verify crs
  8. Database server installation
    8.1) 300-check-env.sh - checks environment for Oracle database
    8.2) 310-InstOracle.1sh - starts Oracle Installer for the main database
    8.3) 320-RunRootSh.sh – running root.sh scripts. Patching oracle for x86_64
    8.4) 340-run-netca.1sh - run Network Configuration Assistant (on single node)
    8.5) 350-relink-with-aio.sh – relink Oracle to turn on Async IO
  9. Creating first DATABASE
    9.1) Run dbca (400-create-database.sh) - starts dbca and creates database
    9.2) Edit /etc/oratab
  10. Basic RAC management
    10.1) CRS logs
    10.2) EM Console
    10.3) Starting, stopping and managing
    10.4) Verify system shutdown and startup
Problems
References
APPENDIX 1. Modified orarun scripts
APPENDIX 2. List of packages

Table of scripts.
Script 001-patch-iscsi.sh
Patch iscsi.patch
File 002.create-lvmgroups.1sh
File 005-copy-rawnames.sh
File 010-install-orarun.sh
File 015-edit-oraprofile.sh
File 018-edit-rcoracle.sh
File 020-start-rcoracle.sh
File 030-check-oracle-user.sh
File 060-setup-xntp.sh
File 070-edit-services.sh
File 080-ssh-genkeys.sh
File 090-verify-ssh-access.sh
File 100-verify-rawfiles.sh
File 200-InstCRS.1sh
File 210-runInventory.sh
File 220-runRootsh.sh
File 230-check-crs.sh


File 300-check-env.sh
File 310-InstOracle.1sh
File 320-RunRootSh.sh
File 340-run-netca.1sh
File 350-relink-with-aio.sh
File 400-create-database.sh
File 410-edit-oratab.sh
File /etc/profile.d/oracle.sh (FILE.d/oracle.sh)
File /etc/sysconfig/oracle (FILE.d/sysconfig.oracle)
File /etc/init.d/oracle (FILES.d/init.oracle)

Text styles.

Here is an example of output:
testrac12# vi /etc/iscsi.conf

Here is an example of script source:
File xxx.sh:
#!/bin/sh
. CONFIG.sh
rciscsi start

Introduction.
Installation parameters:

- Database: Oracle 10g
- OS: Linux SLES9 SP1
- Platform: x86_64 (AMD64 and EM64T); should work for i386 as well.
- SAN storage: NetApp, with iSCSI license;
- Storage management: Oracle ASM.
- Installation mode: Real Application Cluster, 2 nodes.
- Oracle home directory: /opt/oracle
- Oracle BASE: /opt/oracle.

The installation uses scripts. The scripts are part of our installation library for Oracle and Linux, which was designed as an automation tool and allows easy Oracle and Oracle Real Application Cluster installation and reinstallation (with guaranteed results). I believe that an i386 installation does not make any difference.

There is another document describing installation of RAC over NFS, so we will pay more attention to the iSCSI and ASM details here.

This is not an installation manual; it is an installation example. I made a few decisions before installation which simplified my job BUT are not mandatory:

- I use LVM for iSCSI volume naming, not iSCSI IDs;
- I use named raw devices (using a new script for raw devices) to simplify Oracle administration;


- I use modified 'orarun' scripts. Orarun is standard for SuSE Linux, but it changes over time, so I added the orarun RPM to the set of installation files (in the ../RPMs directory), installed it, and then replaced the key files with my own copies (from FILES.d).

The document below describes an Oracle 10g Real Application Cluster installation on SLES9 Enterprise Server. It was tested on:

- Linux servers: DELL PowerEdge 2850 servers;
- NAS server: NetApp FAS270c server, with iSCSI license;
- SUSE Linux Enterprise Server 9, with Service Pack 1, for the x86-64 platform;
- Oracle 10g Release 1 (10.1.0.3) for Linux x86-64.

I recommend using the scripts, but if you are a skilled Linux and Oracle administrator and want to do everything manually, just follow their sources as an example.

You are not required to read this entire document and investigate all the scripts. In reality, you can do everything very fast, in 3 steps:

- Follow the prerequisites chapter and prepare the installation;
- Run the scripts one by one, paying attention to the scripts which run in parallel with the Oracle installer (instead of root.sh);
- Have everything completed in 2-3 hours.

But, of course, the best method is: read the document, understand what each script is doing, then run the script.

Notice. I recommend following all the tiny details of this manual. You can remove many packages and eliminate some steps, but if it then does not work, it becomes difficult to understand the reason. For example, I always install all C/C++ development selections and the GUI, which lets me avoid numerous manual package selections and many possible errors. Future 'orarun' packages should resolve all dependencies for Oracle.

This is not the SIMPLEST RAC + ASM installation; I tried to address a few more things here:

- iSCSI cluster on NetApp;
- Sharing the iSCSI load between 2 controllers;
- A few ASM volumes.

You can simplify it by using:
- A single iSCSI LUN;
- A single LVM group, with 3 LVM volumes: OCRFile, CSSFile, iASM.

I used GUI configuration assistants wherever possible, and used the OEM web interface for database management. A skilled DBA can do most of these actions manually, of course.

System and network installations.
1. Prerequisites and hardware.
You must have (these are all commercial products, available for evaluation and/or under special development licenses):


- SuSE Linux Enterprise Server 9 (better with Service Pack 1);
- Oracle 10.1.0.3 for AMD64/EM64T (2 CDs or 2 cpio files) with Cluster Ready Services (1 CD or 1 cpio file). I believe that 10.1.0.4 will not make any difference.
- 2 AMD64/EM64T servers (I used DELL 2850 servers with 2 CPUs each);
- A Network Appliance (NetApp) NAS system with iSCSI license;
- A 1 Gbit Ethernet switch (or a VLAN on a big enterprise switch).

Logical connections in my example:

[Diagram: testrac11 and testrac12 (Dell 2850 servers) each have eth0 in the access VLAN (reached through a router) and eth1 in the storage VLAN; the Network Appliance FAS270c server sits in the storage VLAN.]

The filer is connected to the storage VLAN (and, in my case, also to the access VLAN for redundancy). The filer IPs are:

- fas-1a-1 (controller 1)
- fas-1b-1 (controller 2)

2. System installation.
2.1) Allocate names and IP addresses. You will need 4 IPs in the access network and 2 IPs in the storage network.

Virtual IPs are used by the Oracle cluster as service IPs and can float from a failed server to the server which takes over its functions. Add records for the primary IPs (not the virtual ones) into the hosts file (IMPORTANT! I saw problems if an IP resolved into a fully qualified name rather than a single-word name). We used in this example:

Server      eth0 IP                     eth0 virtual IP                 eth1 IP
testrac11   testrac11 = 10.25.32.111    testrac11-vip = 10.25.32.113    testrac11-1 = 10.253.32.111
testrac12   testrac12 = 10.25.32.112    testrac12-vip = 10.25.32.114    testrac12-1 = 10.253.32.112


Configure these names in DNS:

Host           Domain        Type  Pref  Data
testrac11      mydomain.com  A           10.25.32.111
testrac11-1    mydomain.com  A           10.253.32.111
testrac11-vip  mydomain.com  A           10.25.32.113
testrac12      mydomain.com  A           10.25.32.112
testrac12-1    mydomain.com  A           10.253.32.112
testrac12-vip  mydomain.com  A           10.25.32.114

IMPORTANT – ASSIGN ALL IPs BEFORE INSTALLATION. DOUBLE CHECK THESE NAMES, or DOUBLE CHECK YOUR DNS. 90% of cluster installation problems have been caused by improper host names. I recommend using the '/etc/hosts' file for your configuration; see the example here:

10.25.32.111 testrac11 testrac11.mydomain.com
10.25.32.112 testrac12 testrac12.mydomain.com
10.253.32.111 testrac11-1 testrac11-1.mydomain.com
10.253.32.112 testrac12-1 testrac12-1.mydomain.com
10.25.32.113 testrac11-vip
10.25.32.114 testrac12-vip
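To catch naming problems early, you can verify that every name resolves correctly on both nodes. A minimal check sketch (the host names are the ones from this example; replace them with yours):

for h in testrac11 testrac11-vip testrac11-1 testrac12 testrac12-vip testrac12-1
do
    # getent consults /etc/hosts and DNS in nsswitch order
    getent hosts $h || echo "*** $h does not resolve"
done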

2.2) Install SuSE Linux Enterprise Server 9, with Service Pack 1. Here are my selections:


I recommend installing, at least:
- Basic runtime system;
- Graphical base system;
- Linux tools;
- LSB Runtime environment;
- Help and support documentation;
- C/C++ development;
- Kernel sources (from Various Linux tools).

If you are doing all of this for the very first time, you had better follow the settings above without variations.

2.3) Configure network. 1

1 I found an important network issue with DELL 2850 servers connected to a Cisco 2970 Catalyst switch – port negotiation took about 20 seconds after the system already treated the interfaces as connected. So, I added a 30-second delay to the /etc/init.d/iscsi script (which I modify by patch anyway), because this script runs just after the network script. You can add 'sleep 30' to the '/etc/init.d/network' script instead. Verify a system reboot after installation, and if you see that the system cannot mount NFS disks at boot time but works fine later, add such a delay.
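If you prefer the network-script variant of the workaround, the idea is simply to delay everything that follows interface startup. This is only a sketch of my assumption about where such a delay would go; it is not part of the installation scripts:

# at the end of the 'start' branch of /etc/init.d/network:
# give the switch time to finish port negotiation before iscsi/NFS start
sleep 30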


Do not forget to EDIT the existing interface first (which will have a static IP), then add the second interface and set it up for the back (storage) network. See here:

Set up jumbo frames for the second interface:
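The YaST screen is not reproduced here, so here is a rough sketch of the resulting settings, assuming the usual SLES sysconfig layout (on SLES9 the file may be named ifcfg-eth-id-<MAC> instead of ifcfg-eth1; the address is the storage IP of testrac11 from the table above):

# /etc/sysconfig/network/ifcfg-eth1  (storage interface)
BOOTPROTO='static'
IPADDR='10.253.32.111'
NETMASK='255.255.255.0'
STARTMODE='onboot'
MTU='9000'        # jumbo frames; the switch ports and the filer interface must allow MTU 9000 as well

testrac11:~ # rcnetwork restart
testrac11:~ # ip link show eth1 | grep mtu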

2.4) Upgrade system to ServicePack 1.2

There are many methods to upgrade. You can log in as root from the console (using the GUI, not text mode), insert the SP1 disk, and follow the prompts. Or you can, instead, select 'YaST2 -> System -> Patch CD Update' and use your copy of the SP1 disks as a source.

2 Service Pack 1 is not required, but it improves performance, so it is recommended.


2.5) Create initiator ID.

You must create an iSCSI initiator ID on each of the servers. You can do it by EITHER of 2 methods:

- Start iscsi the very first time and allow it to create the iscsi ID for you:
  vi /etc/iscsi.conf
  (add
  DiscoveryAddress=Filer-IP-1
  DiscoveryAddress=Filer-IP-2
  and run
  rciscsi start
  rciscsi stop)

- Create the ID manually by editing the file:
  vi /etc/initiatorname.iscsi
  The file should contain only comments and 1 line with the name:

InitiatorName=iqn.1987-05.com.reverse-domain:server-name

I used such names in this installation:
testrac11: InitiatorName=iqn.1987-05.com.exigengroup:testrac11.sjclab
testrac12: InitiatorName=iqn.1987-05.com.exigengroup:testrac12.sjclab

This approach provides more readable names. You will need these names in the next step (so do it here, not in the iSCSI configuration section).

3. Filer configuration.3

Now we must open the NetApp and configure iSCSI LUNs on it. I configured (in this example) 3 iSCSI LUNs – LUN01 and LUN03 for the database, and LUN02 for logs, redo and other purposes (and for the CRS and CSS files).

FAS-1a:

LUN              Description                      Size    Status  Maps (Group : LUN ID)
/vol/vol0/LUN01  Disk 36Gb on 2-disk volume       36 GB   online  testrac : 0
/vol/vol1/LUN02  Disk 200Gb on multi-disk volume  200 GB  online  testrac : 1

FAS-1b:

LUN              Description                      Size    Status  Maps (Group : LUN ID)
/vol/vol0/LUN03  Disk2 on 2-disk volume           36 GB   online  testrac : 0

3 You can simplify this section by configuring a single LUN and using it for 3 LVM volumes (OCRFile, CSSFile, iASM). I show a configuration with 2 iSCSI controllers and 3 LUNs.

I did not configure access control in this example, but configured the initiator group as follows:

Group Name: (Enter a group name for the initiator group.)
    testrac
Type: (Select a type for the initiator group.)
    iSCSI
Operating System: (Select the operating system type of the initiators in this group.)
    Linux
Initiators: (Enter a list of initiator names, separated by commas, spaces, or newlines. For an FCP initiator group, enter WWPNs (world wide port names). For an iSCSI initiator group, enter iSCSI node names. To force the removal of a mapped initiator, check the Force removal of mapped initiators checkbox.)
    iqn.1987-05.com.exigengroup:testrac11.sjclab
    iqn.1987-05.com.exigengroup:testrac12.sjclab

Names must be the same as you configured in ‘/etc/initiatorname.iscsi’ above.
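For reference only, the filer side can also be configured from the ONTAP command line. This is a rough sketch assuming Data ONTAP 7-mode syntax (the GUI steps above are what was actually used; verify the exact commands against your ONTAP release):

fas-1a> igroup create -i -t linux testrac iqn.1987-05.com.exigengroup:testrac11.sjclab iqn.1987-05.com.exigengroup:testrac12.sjclab
fas-1a> lun create -s 36g -t linux /vol/vol0/LUN01
fas-1a> lun map /vol/vol0/LUN01 testrac 0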

4. Images and X11.4

4.1) System images.

I recommend setting up an external NFS image server and creating installation sources by copying the installation disks:

- SLES9 CD1 as SLES9-CORE (copy CD1 into this directory);
- SLES9 CD2-CD6 as SLES9-Image (copy CD2-CD6 into this directory, accepting all overwrites);
- SLES9 SP1 CD1-3 as SLES9-SP1 (copy the SP1 disks into this directory, accepting all overwrites).

For example:
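A sketch of the copy step (the CD mount point is an assumption; adjust the paths to your image server layout – cp -a preserves the symlinks and permissions the SLES disks rely on):

mount /media/cdrom                                   # SLES9 CD1 inserted
mkdir -p /image/UNIX/SuSe9/SLES9-CORE
cp -a /media/cdrom/. /image/UNIX/SuSe9/SLES9-CORE
umount /media/cdrom
# repeat for CD2-CD6 into SLES9-Image and for the SP1 CDs into SLES9-SP1,
# accepting all overwrites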

4 It is not absolutely required. You can use CDs for installation and copy the installation scripts onto local disks. But an image server simplifies everything and allows you to reinstall everything in a few hours, if required.


After installing the basic system, I open YaST2, mount these directories over NFS, and then set up new installation sources:
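The same idea from the command line might look like the sketch below (the server name 'image' matches the df output later in this chapter; 'yast2 inst_source' is my assumption for the 'Change Source of Installation' module name on SLES9):

testrac11:~ # mkdir -p /image/UNIX
testrac11:~ # mount -t nfs image:/UNIX /image/UNIX
testrac11:~ # yast2 inst_source &     # add /image/UNIX/SuSe9/SLES9-CORE etc. as installation sources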

Now you can install any component, without inserting CDs.

4.2) Prepare Oracle installation disks.

You will need Oracle/DB and Oracle/CRS directories (the first with all CDs from the Database distribution, the second with the CD for Cluster Ready Services) from Oracle 10g for Linux x86-64. You can use the CDs and copy files from them, install from CD directly, or (if you have an account and can download) download the files from Oracle:5

ship.crs.lnxx86-64.cpio
ship.db_Disk1.lnxx86-64.cpio.gz
ship.db_Disk2.lnxx86-64.cpio.gz

5 Read and obey all Oracle licenses before downloading the files and installing an Oracle cluster.

Now expand them (on local disks or onto the image server), for example:

mkdir /image/UNIX/Oracle10
cd /image/UNIX/Oracle10
mkdir CRS
mkdir DB
cd CRS
gzip -d < $DOWNLOAD_FILES/ship.crs.lnxx86-64.cpio | cpio -idmv
cd ..
cd DB
gzip -d < $DOWNLOAD_FILES/ship.db_Disk1.lnxx86-64.cpio.gz | cpio -idmv
gzip -d < $DOWNLOAD_FILES/ship.db_Disk2.lnxx86-64.cpio.gz | cpio -idmv

(You must agree to the development license if you download these files.)

Important. If you use a Windows NFS server, do not copy the disks on Windows – these CDs contain symlinks, so you must use Unix for the unarchiving and copy operations.


For example:

4.3) Copy installation scripts on NFS server.

Now, take the installation scripts attached to this document and unpack them on this NFS server. You must have these scripts available from all nodes for a simple installation.
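For example, a minimal sketch of unpacking them (the target directory matches the /INST layout used later in this document):

cd /image/UNIX/SuSe9
tar xzvf SLES9-ORA10RAC-iSCSI-ASM.tgz
ls SLES9-ORA10RAC-iSCSI-ASM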

4.4) X11 access.

Oracle installation requires an X11 screen (X11 is the Unix windowing system). There are many X11 systems available (and you always have one when you are working from the Linux console in graphical mode). I usually use the 'CYGWIN' (http://www.cygwin.com/) package for Windows, when I must use a Microsoft Windows screen, installing:

- Basic cygwin set;
- Basic X11 set;
- Open SSH.

After that, you can log in to the SuSE server using commands such as:
- Start the cygwin terminal (click on the icon);
- startx & (opens an xterm window);
- slogin -X root@hostname

Or, better, allow XDM on SUSE Linux (open YaST2 -> System -> /etc/sysconfig editor, search for 'DISPLAYMANAGER_REMOTE_ACCESS', set it to 'yes', Apply, then run from the console 'init 3', then 'init 5'), then run X in query mode:

- X -query host-name

Important notices:
- To change user when you have X11 access, use 'sux' instead of 'su'. For example: 'sux - oracle';
- To log in to a remote host with X11 forwarding, run 'slogin -X ...'.

You can use any other method to get X11 access (xhost, etc.). Just be prepared – you will need X11 access. The best method is to use the Linux console in graphical mode.

4.5) Starting installation.

Log in as root on both servers (open 2 xterm windows, 1 for the first server, 1 for the second). For example, I run 'X -query testrac11', log in as myself, then:

- Click on the terminal icon to get a terminal for the first server, and run 'sux -' to become root;
- Click on the terminal icon again, then run 'slogin -X root@second-server' to log in to the second node (with X11 forwarding).

Now it's time to begin using the scripts. Copy them onto the image server (you can use one of the RAC nodes as such a server), and mount it.

testrac12:~ # mkdir -p /image/UNIX
testrac12:~ # mkdir -p /image/REP
testrac12:~ # YaST2 nfs &
testrac12:~ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             20971896   2481108  18490788  12% /
tmpfs                  2026756        16   2026740   1% /dev/shm
/dev/sda3             10181952     32828   9631904   1% /u01
image:/Repository    195358400 174076348  21282052  90% /image/REP
image:/UNIX          195358400 174076344  21282056  90% /image/UNIX

Now, chdir to the installation directory and edit the CONFIG.sh file.

testrac12:/image/UNIX/SuSe9 # cd /image/UNIX/SuSe9/SLES9-ORA10RAC-iSCSI-ASM
testrac12:/image/UNIX/SuSe9/SLES9-ORA10RAC-iSCSI-ASM # ls
.       SLES9-ORA10RAC-iSCSI-ASM  SLES9-x64-ORA10.tgz     docs         tests
..      SLES9-i386-ORA10          SLES9-x64-ORA10RAC      index.html
RPMs    SLES9-x64-ORA10           SLES9-x64-ORA10RAC.tgz  iscsi-files
testrac12:/image/UNIX/SuSe9/SLES9-ORA10RAC-iSCSI-ASM # ln -s `pwd` /INST
testrac12:/image/UNIX/SuSe9/SLES9-ORA10RAC-iSCSI-ASM # kwrite CONFIG.sh


5. iSCSI and raw devices configuration, preparing for RAC and ASM installation.
Now we can start iSCSI and create all the necessary volumes. We will use Automatic Storage Management for Oracle data (including archived logs) and raw devices for Cluster Ready Services. To simplify these tasks, I used the LVM2 volume manager and introduced a modified 'raw' script (rawnames) which allows me to use symbolic names in Oracle and CRS instead of magical 'raw1', 'raw2', etc. (which is extremely tricky when dealing with dynamic iSCSI disks).
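Conceptually, what the rawnames script does at boot for each line of /etc/rawnames.conf is equivalent to the following sketch (this is an illustration, not the real script, which is shipped in FILES.d):

# raw2,fas-1a-v1/OCRFile,OCRFile  ->  bind the block device to a raw device
raw /dev/raw/raw2 /dev/fas-1a-v1/OCRFile
# ...and publish a stable symbolic name for Oracle and CRS to use
mkdir -p /dev/rawnames
ln -sf /dev/raw/raw2 /dev/rawnames/OCRFile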

5.1) List of raw devices.

We must configure a few raw devices for the RAC CLUSTER:
1) OCRFile – 100 Mb – for the CRS server;
2) CSSFile – 100 Mb – for the CSS server (cache synchronization);
3) iASM0 – 36 Gb – ASM storage for the database;
4) iASM1 – 36 Gb – ASM storage for the database;
5) iBIGASM – 100 Gb – ASM storage for redo, archive logs and so on.

(Only the first 3 are required for a minimal installation.)


5.2) Patching iscsi script.6

Only the first 3 raw devices are required (you can use one raw device for ASM and place everything on it).

Script 001-patch-iscsi.sh applies a patch to the iscsi script and installs the rawnames script (the script and the patch are in the FILES.d directory).

Script 001-patch-iscsi.sh:

#!/bin/sh
#
# 001 script - patch iscsi
#
. CONFIG.sh
echo "Patching system iscsi script"
patch -d /etc/init.d < FILES.d/iscsi.patch

Patch iscsi.patch:

*** /etc/init.d/iscsi~  Wed Dec 15 14:01:46 2004
--- /etc/init.d/iscsi   Mon Mar 21 22:42:52 2005
***************
*** 26,31 ****
--- 26,32 ----
  test -d $BASEDIR || exit 0

+ rc_reset
  for configfile in /etc/iscsi.conf /etc/initiatorname.iscsi; do
      if [ ! -f ${configfile} ]; then
***************
*** 167,172 ****
--- 168,177 ----
          echo "$min $default $TCP_WINDOW_SIZE" > /proc/sys/net/ipv4/tcp_wmem
      fi
  fi
+ # Take some time to adapt interfaces
+ echo -n "Sleeping 30 seconds to allow port negotiations... "
+ sleep 30
+ # ping -c 2 10.254.32.106 || sleep 30
  # start
  echo -n "Starting iSCSI: iscsi"
***************
*** 219,224 ****
--- 224,233 ----
      touch /var/lock/subsys/iscsi
  fi

+ #
+ # Now, sleep 20 seconds to allow iSCSI to discover targets, and then run LVM again to read new volumes
+ echo -n " sleeping 20 seconds... " && sleep 20
+ /etc/init.d/boot.lvm $*
  # if we have an iSCSI fstab, process it
  if [ -f /etc/fstab.iscsi ] ; then
      echo -n " fsck/mount "

6 You can do everything without these patches, using standard 'raw' devices over iSCSI volumes identified by volume ID; but you will end up with meaningless 'rawN' names in Oracle, which creates a good potential for human errors. So, I recommend installing the 'rawnames' script.

Output:
testrac12:/INST # sh 001*sh
Patching system iscsi script
patching file iscsi
testrac12:/INST # ls /etc/init.d/rc5.d
.                  K10sshd          S01irq_balancer  S12ldap
..                 K12nfs           S01isdn          S12rawnames
K07splash_late     K12nfsboot       S01random        S12running-kernel
K07xdm             K14portmap       S02coldplug      S12sshd
K08cron            K14resmgr        S05network       S13kbd
K08hwscan          K14slpd          S06syslog        S13postfix
K08nscd            K14smbfs         S08portmap       S13powersaved
K09postfix         K14splash_early  S08resmgr        S13splash
K09powersaved      K16syslog        S08slpd          S14cron
K09splash          K17network       S08smbfs         S14hwscan
K10alsasound       K20coldplug      S08splash_early  S14nscd
K10cups            K21hotplug       S10nfs           S15splash_late
K10fbset           K21irq_balancer  S10nfsboot       S15xdm
K10ldap            K21isdn          S12alsasound
K10rawnames        K21random        S12cups
K10running-kernel  S01hotplug       S12fbset

Run it on BOTH nodes.

5.3) Running iscsi and creating volumes on iSCSI.

Now, you must configure and start iscsi on the servers. First, edit the /etc/iscsi.conf file, adding the target IPs (and possibly some other parameters, for redundancy for example). Start iscsi on node 1 first (do not start it on the second node until you have created the logical volumes). Here is my configuration:7

# iSCSI configuration file - see iscsi.conf(5)
PortalFailover=yes
PreferredSubnet=10.254.0.0/16
#
DiscoveryAddress=10.254.32.105 PreferredPortal=10.254.32.105
DiscoveryAddress=10.254.32.106 PreferredPortal=10.254.32.106

rciscsi start
.....
testrac12:~ # netstat -an | grep 3260
tcp        0      0 10.254.32.112:32811     10.254.32.106:3260      ESTABLISHED
tcp        0      0 10.254.32.112:32778     10.254.32.106:3260      ESTABLISHED
tcp        0      0 10.254.32.112:32812     10.254.32.105:3260      ESTABLISHED
tcp        0      0 10.254.32.112:32794     10.254.32.105:3260      ESTABLISHED
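Before creating volumes, it is worth confirming that the LUNs were actually discovered as SCSI disks; a quick check (the device names will differ on your system):

testrac12:~ # cat /proc/scsi/scsi     # the NetApp LUNs should appear as additional SCSI devices
testrac12:~ # fdisk -l                # the new disks show up without a valid partition table yet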

7 Do not configure iscsi without security if you do not have a fully isolated storage network (and it is better to configure security even in such cases). My configuration is for test purposes only. Spend 10 minutes and configure iSCSI security.


5.4) Creating LVM volumes.

Now it's time to create the raw devices. I used logical volumes for simplicity, because they allow me not to depend on the discovery order of iSCSI. You can use partitions instead (it is not very difficult to configure).

File 002.create-lvmgroups.1sh:

#!/bin/sh
#rciscsi start
# This script is just a reminder of what to do
echo '*********************************
* start iSCSI on one node ONLY  *
* Call YaST2                    *
* create LVM partitions on 3    *
* iSCSI disks                   *
* Create 3 LVM groups:          *
*   fas-1a-v0 33GB              *
*   fas-1b-v0 33GB              *
*   fas-1a-v1 101GB             *
* Create logical volumes        *
*   fas-1a-v0/iASM0   33GB      *
*   fas-1b-v0/iASM1   33GB      *
*   fas-1a-v1/OCRFile 200Mb     *
*   fas-1a-v1/CSSFile 200Mb     *
*   fas-1a-v1/iBIGASM 100Gb     *
* Then, start iscsi on          *
* second node and verify that   *
* volumes are recognized        *
*   ls -l /dev/mapper           *
* Then, run script 005          *
*********************************'
echo -n 'Completed it?_'
read x

Run on ONE node only (you can use script 002*.sh): sh 002*.sh
- Start iscsi: rciscsi start
- See the iSCSI devices in the YaST partitioner, and create LVM partitions on them:


- Create LVM groups, 1 for every iSCSI LUN, and create LVM volumes as described above (do not format them, and clear the file system name when creating them), for example:

LVM volumes:


fas-1a:LUN0: fas-1a-vol0
  Logical volume: iASM0, 33 Gb

fas-1a:LUN1: fas-1a-vol1
  Logical volumes:
    OCRFile, 200Mb
    CSSFile, 200Mb
    iBIGASM, 100Gb

fas-1b:LUN3: fas-1b-vol0
  Logical volume: iASM1, 33Gb
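If you prefer the command line to YaST, a rough equivalent of the layout above is sketched here. The /dev/sdb, /dev/sdc, /dev/sdd names are assumptions – check which device names the iSCSI LUNs received on your system – and this sketch uses whole disks as physical volumes instead of creating LVM partitions first:

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate fas-1a-v0 /dev/sdb
vgcreate fas-1b-v0 /dev/sdc
vgcreate fas-1a-v1 /dev/sdd
lvcreate -L 33G  -n iASM0   fas-1a-v0
lvcreate -L 33G  -n iASM1   fas-1b-v0
lvcreate -L 200M -n OCRFile fas-1a-v1
lvcreate -L 200M -n CSSFile fas-1a-v1
lvcreate -L 100G -n iBIGASM fas-1a-v1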

Here is the result:

5.5) Creating RAW devices.8

Now, verify the list of logical devices created by LVM:
testrac11:/INST # ls -l /dev/mapper
total 178
drwxr-xr-x   2 root root    472 Mar 23 12:24 .
drwxr-xr-x  41 root root 181480 Apr  5 19:23 ..
lrwxrwxrwx   1 root root     16 Mar 23 12:24 control -> ../device-mapper
brw-------   1 root root 253,  4 Apr  5 20:39 fas--1a--v0-iASM0
brw-------   1 root root 253,  7 Apr  5 20:39 fas--1a--v1-CSSFile
brw-------   1 root root 253,  6 Apr  5 20:39 fas--1a--v1-OCRFile
brw-------   1 root root 253,  8 Apr  5 20:39 fas--1a--v1-iBIGASM
brw-------   1 root root 253,  5 Apr  5 20:39 fas--1b--v0-iASM1

Notice that the system duplicates '-' in the /dev/mapper directory (in the volume group names). Now, edit the 'rawnames.conf' file (in the installation scripts) to create raw devices for all these names:

8 The previous 'rawnames' script used ':' as a delimiter, but it has been changed to allow iSCSI disk names (with ':' inside) to work properly.


testrac12# vi FILES.d/rawnames.conf

# /etc/rawnames.conf
#
# The format of this file is:
# raw<N>,<blockdev>,symbolic-name[,owner]
#
# example:
# ---------
# raw1,hdb1,localdisk
#
# this means: bind /dev/raw/raw1 to /dev/hdb1
# and link to /dev/rawnames/localdisk
#
# ...
# This group is for iSCSI ASM installation
raw1,fas-1a-v0/iASM0,iASM0
raw2,fas-1a-v1/OCRFile,OCRFile
raw3,fas-1a-v1/CSSFile,CSSFile
raw4,fas-1a-v1/iBIGASM,iBIGASM
raw5,fas-1b-v0/iASM1,iASM1

5.6) Installing rawnames script, turning on iSCSI, and testing how it works thru system reboot.

File 005-copy-rawnames.sh:

#!/bin/sh
#
cp FILES.d/rawnames /etc/init.d/rawnames
insserv rawnames
#
echo 'Now verify rawnames.conf shared file'
if [ -f /etc/rawnames.conf ]
then
    echo You have already /etc/rawnames.conf installed
    echo Here is difference with shared file in scripts
    diff /etc/rawnames.conf FILES.d/rawnames.conf
    echo -n 'You can ^C to abort or press Enter to continue_'
    read x
fi
echo vi FILES.d/rawnames.conf
vi FILES.d/rawnames.conf
#
echo We are about to copy rawnames.conf into system and activate iscsi
echo -n Press ENTER to continue_
read x
cp FILES.d/rawnames.conf /etc/rawnames.conf
insserv iscsi
rciscsi start
#
# repeat these 2 services here, because we added something between
#
/etc/init.d/boot.lvm start
/etc/init.d/rawnames start
ls -l /dev/rawnames

Run on both nodes:
sh 005*sh

5.7) Now reboot to verify that everything will come up correctly.


reboot

and when the systems come back, verify that they started iSCSI, found the LVM volumes and created the 'rawnames' devices.

testrac11:/INST # lvscan
  ACTIVE   '/dev/fas-1b-v0/iASM1'   [35.90 GB]  next free (default)
  ACTIVE   '/dev/fas-1a-v1/OCRFile' [200.00 MB] next free (default)
  ACTIVE   '/dev/fas-1a-v1/CSSFile' [200.00 MB] next free (default)
  ACTIVE   '/dev/fas-1a-v1/iBIGASM' [100.00 GB] next free (default)
  ACTIVE   '/dev/fas-1a-v0/iASM0'   [35.90 GB]  next free (default)
  ACTIVE   '/dev/asm/asms'          [39.00 GB]  next free (default)
  ACTIVE   '/dev/fibre0/asmf'       [67.30 GB]  next free (default)
  ACTIVE   '/dev/test1/test11'      [20.00 GB]  next free (default)
  ACTIVE   '/dev/test1/test12'      [20.00 GB]  next free (default)
testrac11:/INST # ls /dev/rawnames
.  ..  CSSFile  OCRFile  fcFCdisk  fcSATAdisk  iASM0  iASM1  iBIGASM

CONGRATULATIONS. You can proceed with real ORACLE installation.

6. Prepare CRS installation.
Now we begin the script-based installation. We will run the scripts one by one and comment on their actions. You can read the script sources and either run the scripts or reproduce the same steps manually.

6.1) Login to both nodes.

Become ROOT (use 'sux -' if you logged in as another user).

6.2) Verify CONFIG.sh once again.

This file contains the configuration for our installation scripts. It is in reality a standard shell profile. I marked in bold the lines which are very likely to be changed.
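As an illustration only – this is NOT the shipped CONFIG.sh; the variable names are the ones referenced by the scripts quoted in this document, and the values are example assumptions:

# CONFIG.sh - settings used by the installation scripts
NODES="testrac11 testrac12"                # public node names
NODESP="testrac11-1 testrac12-1"           # private (storage network) names
SID=test1                                  # ORACLE_SID for this node (test2 on the other node)
RPMORARUN=RPMs/orarun-*.rpm                # orarun package installed by script 010
ORA10CD1=/image/UNIX/Oracle10/DB/Disk1     # directory with the database runInstaller
CRS10CD1=/image/UNIX/Oracle10/CRS/Disk1    # directory with the CRS runInstaller
OCRFile=/dev/rawnames/OCRFile              # shared raw device for the cluster registry
CSSFile=/dev/rawnames/CSSFile              # shared raw device for the voting (CSS) file
export NODES NODESP SID RPMORARUN ORA10CD1 CRS10CD1 OCRFile CSSFile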

6.3) 010-install-orarun.sh - installs orarun, unlock oracle user and set up password.

On both nodes, run script 010 – ‘sh 010*.sh’. Enter new password for user 'oracle'.

File 010-install-orarun.sh:

#!/bin/bash
#       Install orarun package
#
. CONFIG.sh
rpm -U $RPMORARUN
passwd oracle
chsh -s /bin/bash oracle


The script:
- Installs the orarun RPM;
- Unlocks the 'oracle' user by setting a password and changing the shell to an executable one.

Result:
testrac12:/INST # sh 010*
Updating etc/sysconfig/oracle...
Changing password for oracle.
New password:
Bad password: it is based on a dictionary word
Re-enter new password:
Password changed
Changing login shell for oracle.
Shell changed.

6.4) 015-edit-oraprofile.sh - copy our version of oracle profile and edit it (set up SID).

On both nodes, run script 15: sh 015*sh

File 015-edit-oraprofile.sh:

#
#       Edit /etc/profile.d/oracle.sh and oracle.sh
#
. CONFIG.sh
test -f /etc/profile.d/oracle.sh-ORIG && cp /etc/profile.d/oracle.sh /etc/profile.d/oracle.sh-ORIG
sed "s+ORACLE_SID=test1+ORACLE_SID=$SID+" FILES.d/oracle.sh > /tmp/oracle.sh && cp /tmp/oracle.sh /etc/profile.d/oracle.sh
chmod a+x /etc/profile.d/oracle.sh
chown oracle /etc/profile.d/oracle.sh
cd /etc/profile.d
if [ ! -f ~oracle/.profile ]
then
    cd ~oracle || exit 1
    cp -r /etc/skel/.??* ~oracle
    touch .profile
    find . -print | xargs chown oracle
fi

The script:
- Makes a reserved copy of the orarun profile file;
- Installs our version of this file (see appendix);
- Sets up the ORACLE_SID variable (notice that it is different on different nodes);
- Makes this script editable for the oracle user (not required, but it simplifies many things);
- Copies the standard files into ~oracle (orarun creates the user with an empty directory);
- Changes the owner of these files.

Results (SIDs must be different):

testrac12:/INST # sh 015*sh
testrac12:/INST # grep SID /etc/profile.d/oracle.sh
  ORACLE_SID=test2


  export ORACLE_BASE ORACLE_HOME ORA_NLS33 ORACLE_SID PATH LD_LIBRARY_PATH CLASSPATH TNS_ADMIN

6.5) 018-edit-rcoracle.sh - copy our version of oracle startup files for Oracle10 cluster.

On both nodes, run script 018.

File 018-edit-rcoracle.sh:

#!/bin/bash
#
#       Edit /etc/init.d/oracle
#
. CONFIG.sh
if [ ! -d /etc/init.d/ORIG ]
then
    mkdir -p /etc/init.d/ORIG
    cp /etc/init.d/oracle /etc/init.d/ORIG/.
fi
if [ ! -d /etc/sysconfig/ORIG ]
then
    mkdir -p /etc/sysconfig/ORIG
    cp /etc/sysconfig/oracle /etc/sysconfig/ORIG/.
fi
#
cp FILES.d/init.oracle /etc/init.d/oracle
cp FILES.d/sysconfig.oracle /etc/sysconfig/oracle

The script:
- Makes a reserved copy of the init script and the sysconfig file;
- Installs our version of these files (see appendix).

Results:
testrac11:/INST # sh 018*sh
testrac11:/INST #

6.6) 020-start-rcoracle.sh - start rcoracle

(We just want to set up the system variables, because nothing is installed yet.) On both nodes, run script 020:

File 020-start-rcoracle.sh:

#!/bin/bash
#
#       Run rcoracle to set up kernel parameters
#
. CONFIG.sh
rcoracle start

Result:


testrac11:/INST # sh 020*sh

##############################################################################
#                 Begin of   O R A C L E   startup section                  #
##############################################################################

ORACLE_HOME directory /opt/oracle/product/10.1.0/db_1 does not exist!
Unsetting ORACLE_HOME, will try to determine it from system...
ORACLE_HOME environment variable not set.
Check /etc/profile.d/oracle.sh and /etc/oratab
Cannot find ORACLE_HOME directory .
Environment settings are wrong! Check /etc/profile.d/oracle.sh
SETTINGS start from /etc/sysconfig/oracle
 - Set Kernel Parameters for Oracle:   yes
 - Start Oracle OCFS:                  yes
 - Start Oracle CRS:                   yes
 - Start Oracle EM:                    yes
Can't find needed file: emctl - Setting START_ORACLE_DB_EM = no
Can't find needed file: init.crs - Setting START_ORACLE_DB_CRS = no
Can't find needed file: /sbin/load_ocfs - Setting START_ORACLE_DB_OCFS = no

Setting kernel parameters for Oracle, see file
/etc/sysconfig/oracle for explanations.

Shared memory:      SHMMAX=3294967296  SHMMNI=4096  SHMALL=2097152
Semaphore values:   SEMMSL=1250  SEMMNS=32000  SEMOPM=100  SEMMNI=256
Other values:       FILE_MAX_KERNEL=131072  IP_LOCAL_PORT_RANGE=1024 65000

ULIMIT values:      MAX_CORE_FILE_SIZE_SHELL=unlimited
                    FILE_MAX_SHELL=65536  PROCESSES_MAX_SHELL=16384

Kernel parameters set for Oracle:                               done

  - Starting Oracle Cluster Filesystem...                       skipped
  - Starting Oracle CRS...                                      skipped
  - Starting Oracle EM dbconsole...                             skipped

##############################################################################
#                      End of   O R A C L E   section                       #
##############################################################################

testrac11:/INST #

The script runs rcoracle to set up the kernel variables. Do not be surprised if it complains about absent applications – you have not installed anything yet.

6.7) 030-check-oracle-user.sh - this script verifies the Oracle user and the location of the Oracle installation disk.

Run it on all nodes.

File 030-check-oracle-user.sh:


#!/bin/bash
# Check oracle user setting
#
. CONFIG.sh
if [ ! -x $ORA10CD1/runInstaller ]
then
    echo -e "\33[31m *** Wrong oracle disk location - no file $ORA10CD1/runInstaller found \33[30m"
    exit 1
fi
if [ ! -x $CRS10CD1/runInstaller ]
then
    echo -e "\33[31m *** Wrong CRS disk location - no file $CRS10CD1/runInstaller found \33[30m"
    exit 1
fi

echo "*** CHECK ORACLE VARIABLES *** "
sux - -c "env" oracle | grep ORACLE

Results:
testrac11:/INST # sh 030*sh
*** CHECK ORACLE VARIABLES ***
xauth:  creating new authority file /opt/oracle/.Xauthority
ORACLE_SID=test1
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/10.1.0/db_1

ORACLE_SID must be different on different nodes.
(Here we ran an NFS mounting script when we installed Oracle RAC on NetApp NFS; we skip it in the ASM case.)

6.8) 060-setup-xntp.sh - set up XNTP service (important to synchronize clock on all nodes).

It starts the yast2 ntp-client module. Please configure xntp and ensure that it is started at boot time.

File 060-setup-xntp.sh:

#
#       Now we use this script only to start xntpd
#
. CONFIG.sh
yast2 xntp-client
/etc/init.d/xntpd status

This just sets up the XNTP server. It is important to have the time synchronized on all RAC nodes.
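A quick way to confirm on each node that the time really is being synchronized (both are standard xntp tools):

testrac11:~ # /etc/init.d/xntpd status
testrac11:~ # ntpq -p      # the selected server is marked with '*' once the clock is synchronized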


6.9) 070-edit-services.sh - script removes conflicting records from /etc/services

(I am not sure if it is still necessary. Anyway, we will run it. Be careful: if you use NIS or LDAP, update them after running the script and verify that they do not keep the removed records.)
Run on all nodes.

File 070-edit-services.sh:

#
. CONFIG.sh
sed '/net8-cman/d' /etc/services > /etc/services- && cp /etc/services- /etc/services && rm /etc/services-
ls -l /etc/services

Script removes net8-cman records from /etc/services.

Results:
testrac11:/INST # sh 070*sh
-rw-r--r--  1 root root 596317 Feb 28 22:17 /etc/services

6.10) 080-ssh-genkeys.sh - set up password-less access between nodes.

The Oracle RAC installer requires that user oracle can run 'ssh' between the RAC nodes without a password and without any warnings, prompts or additional output. This script sets up such access using RSA keys.
Do not use this script if you expose your Oracle system to the public internet.

File 080-ssh-genkeys.sh:

#
#       Generate ssh rsa and dsa keys for password-less access
#       You may want to remove this after you complete the oracle installation
#       Or use ssh-agent and a passphrase for better security
#
#
. CONFIG.sh
#
# Check DNS or HOSTS first
#
for i in $NODES $NODESP
do
    echo testing $i
    if ! ping -q -c 1 $i
    then
        echo "*** BAD NODE NAME $i, fix DNS, hosts or NODES list"
        exit 1
    fi
done
su - oracle -c "ssh-keygen -t rsa -b 2048"
su - oracle -c "ssh-keygen -t dsa -b 2048"
#
#
for i in $NODES
do
    echo Copying keys to node $i
    cat ~oracle/.ssh/*pub | ssh $i 'su - oracle -c "mkdir -p .ssh;touch .ssh/authorized_keys; cat >> .ssh/authorized_keys"'
    su - oracle -c "ssh $i date"
    echo done
done

Run the script on both nodes (first on node1, then on node2), enter an EMPTY passphrase (4 times), enter the root password (2 times), and answer 'yes' to the prompts.
Example (I marked answers as <answer> – you will need to enter the root password a few times. Do not enter a passphrase, keep it empty):

testrac11:/INST # sh 080*sh
testing testrac11
PING testrac11 (10.25.32.111) 56(84) bytes of data.

--- testrac11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
testing testrac12
PING testrac12 (10.25.32.112) 56(84) bytes of data.

--- testrac12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
testing testrac11-1
PING testrac11-1 (10.253.32.111) 56(84) bytes of data.

--- testrac11-1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
testing testrac12-1
PING testrac12-1 (10.253.32.112) 56(84) bytes of data.

--- testrac12-1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.208/1.208/1.208/0.000 ms
Generating public/private rsa key pair.
Enter file in which to save the key (/opt/oracle/.ssh/id_rsa): <ENTER>
Created directory '/opt/oracle/.ssh'.
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /opt/oracle/.ssh/id_rsa.
Your public key has been saved in /opt/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
ad:db:92:f3:40:26:d6:d5:6d:95:89:b8:03:64:b8:7d oracle@testrac11
Generating public/private dsa key pair.
Enter file in which to save the key (/opt/oracle/.ssh/id_dsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /opt/oracle/.ssh/id_dsa.
Your public key has been saved in /opt/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
70:87:ea:cd:1c:12:c6:f1:fd:5e:49:ab:9d:fe:a8:5d oracle@testrac11
Copying keys to node testrac11
Password: <ROOT PASSWORD>
The authenticity of host 'testrac11 (10.25.32.111)' can't be established.
RSA key fingerprint is fe:87:5c:2c:6f:fd:d9:0b:e5:19:dd:f3:f9:72:64:7e.
Are you sure you want to continue connecting (yes/no)? yes <yes>
Warning: Permanently added 'testrac11,10.25.32.111' (RSA) to the list of known hosts.
Mon Feb 28 22:21:28 PST 2005
done
Copying keys to node testrac12
Password: <ROOT PASSWORD>
The authenticity of host 'testrac12 (10.25.32.112)' can't be established.
RSA key fingerprint is e2:8b:a8:a7:13:43:4b:66:b4:b5:9f:26:49:e7:06:d4.
Are you sure you want to continue connecting (yes/no)? yes <yes>
Warning: Permanently added 'testrac12,10.25.32.112' (RSA) to the list of known hosts.
Mon Feb 28 22:21:35 PST 2005
done
testrac11:/INST #

ATTENTION. The script can fail if the root or oracle user has an old .ssh directory with wrong keys (for example, if you did another installation and did not reinstall everything). If it fails, you can try to remove these keys and run the script again:

testrac11# rm -rf ~root/.ssh ~oracle/.ssh
testrac11# sh 080*sh

6.11) 090-verify-ssh-access.sh - verify results (password-less access between nodes).

Run this script on all nodes: sh 090-verify-ssh-access.sh. It is an IMPORTANT script – it sets up the identity on the very first call of ssh, so do not skip it.

File 090-verify-ssh-access.sh:

#
# Now, verify oracle password-less access between nodes
#
#
#
. CONFIG.sh
#
for i in $NODES $NODESP
do
    echo -e testing $i ", \033[31m Answer yes very first time, please \033[m"
    su - oracle -c "ssh -n $i echo TEST"
    lines=`su - oracle -c "ssh -n $i echo TEST" | wc -l`
    if [ x$lines != x1 ]
    then
        echo ATTENTION. Something wrong with oracle 'ssh $i' , check it manually
    else
        echo $i OK
    fi
done

Output:
testrac11:/INST # sh 090*sh
testing testrac11 ,  Answer yes very first time, please
TEST
testrac11 OK
testing testrac12 ,  Answer yes very first time, please
TEST
testrac12 OK
testing testrac11-1 ,  Answer yes very first time, please
The authenticity of host 'testrac11-1 (10.253.32.111)' can't be established.
RSA key fingerprint is fe:87:5c:2c:6f:fd:d9:0b:e5:19:dd:f3:f9:72:64:7e.
Are you sure you want to continue connecting (yes/no)? yes <yes>
Warning: Permanently added 'testrac11-1,10.253.32.111' (RSA) to the list of known hosts.
TEST
testrac11-1 OK
testing testrac12-1 ,  Answer yes very first time, please
The authenticity of host 'testrac12-1 (10.253.32.112)' can't be established.
RSA key fingerprint is e2:8b:a8:a7:13:43:4b:66:b4:b5:9f:26:49:e7:06:d4.
Are you sure you want to continue connecting (yes/no)? yes <yes>
Warning: Permanently added 'testrac12-1,10.253.32.112' (RSA) to the list of known hosts.
TEST
testrac12-1 OK

6.12) Edit and verify /etc/hosts:

Edit this file, if necessary, to make it EXACTLY as in our example (replacing the hostnames and domain). The system installation (usually) creates wrong records for the private interface.

A wrong /etc/hosts (for example, one using a fully qualified domain name in the first field) is the most common source of RAC installation problems. So, double-check and verify that you have perfect host names.

Example:

testrac11# vi /etc/hoststestrac11# cat /etc/hosts10.23.32.111 testrac11 testrac11.mydomain.com 10.254.32.111 testrac11-1 testrac11-1.mydomain.com

10.23.32.112 testrac12 testrac12.mydomain.com


10.254.32.112 testrac12-1 testrac12-1.mydomain.com

10.23.32.113 testrac11-vip testrac11-vip.mydomain.com
10.23.32.114 testrac12-vip testrac12-vip.mydomain.com
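A quick manual sanity check (not one of the numbered scripts) is to resolve every name on every node and confirm that each one comes back from /etc/hosts as expected; the node names below are just the example names used in this document:

for h in testrac11 testrac11-1 testrac12 testrac12-1 testrac11-vip testrac12-vip
do
    getent hosts $h
done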

7. Install Cluster Ready Services.

7.1) 100-verify-rawfiles.sh – verify that oracle has access to the shared files.

Run script 100 on all nodes, and CHECK OUTPUT carefully.

File 100-verify-rawfiles.sh:

#
# 100) Verify that we have access to OCRFile and CSSFile
#
. CONFIG.sh
. /etc/profile.d/oracle.sh
# IMPORTANT. LD_ASSUME_KERNEL must be 2.4.5 here
if [ "$LD_ASSUME_KERNEL" = "" ]
then
    echo -e "\033[31mATTENTION. LD_ASSUME_KERNEL variable absent, set it up first\033[m"
    exit 1
fi
#
ls -lL $OCRFile $CSSFile
if su - oracle -c "test -c $OCRFile -a -w $OCRFile"
then
    echo $OCRFile OK, continue...
else
    echo "Oracle can not write OCRFile=$OCRFile or file is not raw device. Aborted"
    echo "CORRECT ERROR BEFORE ANY ACTIONS"
    exit 1
fi
if su - oracle -c "test -c $CSSFile -a -w $CSSFile"
then
    echo $CSSFile OK, continue...
else
    echo "Oracle can not write CSSFile=$CSSFile or file is not raw device. Aborted"
    echo "CORRECT ERROR BEFORE ANY ACTIONS"
    exit 1
fi
echo "OK, you can continue with CRS installation"

testrac12:/INST # sh 100*sh
crw-rw---- 1 root disk 162, 3 Jun 30 2004 /dev/rawnames/CSSFile
crw-rw---- 1 root disk 162, 2 Jun 30 2004 /dev/rawnames/OCRFile
/dev/rawnames/OCRFile OK, continue...
/dev/rawnames/CSSFile OK, continue...
OK, you can continue with CRS installation
testrac12:/INST #

The script verifies that the oracle user has write access to /dev/rawnames/OCRFile and /dev/rawnames/CSSFile (the names are configured in CONFIG.sh) and that these files are raw devices. If it fails, you cannot install CRS and must fix the problem first (most likely access permissions, or iSCSI not running). Oracle must be a member of the 'disk' group to have write access to these files.
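If the check fails, a minimal manual inspection (using the device names from this document; adjust to your CONFIG.sh) could look like this:

id oracle                                   # the group list should include the group that owns the raw devices ('disk' here)
ls -lL /dev/rawnames/OCRFile /dev/rawnames/CSSFile
su - oracle -c "dd if=/dev/rawnames/CSSFile of=/dev/null bs=512 count=1"    # simple read test as oracle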

7.2) 200-InstCRS.1sh - installing Cluster Ready Services. This is the first real INSTALLATION step.

I use this script for the CRS installation. Remember - when the Oracle Universal Installer (OUI) asks you to run the root.sh files, you had better use our scripts (210* and 220*) instead - they fix some access permission problems which exist in the Oracle installer.

Here it is. Run it on node-1 only.

File 200-InstCRS.1sh:

#
# 200) Install Oracle cluster manager
# Enter /u02/OCRFile and /u02/VCSFile as shared files
. CONFIG.sh
. /etc/profile.d/oracle.sh
# IMPORTANT. LD_ASSUME_KERNEL must be 2.4.5 here
if [ "$LD_ASSUME_KERNEL" = "" ]
then
    echo -e "\033[31mATTENTION. LD_ASSUME_KERNEL variable absent, set it up first\033[m"
    exit 1
fi
#

CRS=$CRS10CD1
if [ ! -x $CRS/runInstaller ]
then
    echo "Can not find CRS installation disk: $CRS10CD1"
    echo "verify variable CRS10CD1 in CONFIG.sh and location of CRS installation files"
    exit 1
fi

sux - -c "mkdir -p $ORACLE_HOME;ORACLE_HOME=$ORACLE_BASE/product/10.1.0/crs_1;mkdir -p $ORACLE_HOME;export ORACLE_HOME; echo $ORACLE+HOME;cd $CRS; ./runInstaller" oraclesleep 5 echo -e "\033[32mWait for runInstaller. ================================

Registry file - $OCRFile Voting file - $CSSFile

================================= \033[m "

Script:
- Changes user to oracle;
- Changes ORACLE_HOME to the CRS home (it should be different from the Database home);
- Verifies that $OCRFile and $CSSFile (from CONFIG.sh) exist, are raw devices, and are writable by 'oracle' (oracle must be a member of group 'disk');
- Starts the Oracle installer for CRS and reminds the user of the expected names for the CRS control files.

The Oracle installer works in the background, so the script returns shortly before you see the Oracle Universal Installer window.

Output:
testrac11:/INST # !sh
sh 200*sh
crw-rw---- 1 root disk 162, 3 Mar 9 19:54 /dev/rawnames/CSSFile
crw-rw---- 1 root disk 162, 2 Mar 16 18:41 /dev/rawnames/OCRFile
/dev/rawnames/OCRFile OK, continue...
/dev/rawnames/CSSFile OK, continue...
+HOME
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be UnitedLinux-1.0, redhat-3 or SuSE-9 Passed

All installer requirements met.

Checking Temp space: must be greater than 80 MB. Actual 27914 MB Passed
Checking swap space: must be greater than 150 MB. Actual 8193MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2005-04-06_02-25-50PM. Please wait ...
Oracle Universal Installer, Version 10.1.0.3.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.

Wait for runInstaller. ================================

Registry file - /dev/rawnames/OCRFile Voting file - /dev/rawnames/CSSFile

=================================

testrac11:/INST #

It starts OUI in the background. Click Next, Next. Now Oracle will ask you to run the first root script:


Run script 210*sh

File 210-runInventory.sh

#
# Run it on every node
. CONFIG.sh
. /etc/profile.d/oracle.sh
sh $ORACLE_BASE/oraInventory/orainstRoot.sh

Results:
testrac11:/INST # sh 210*sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /opt/oracle/oraInventory to oinstall.
testrac11:/INST #

Notice. If the script shows an error message, then most likely something is wrong with $ORACLE_HOME for CRS – the Oracle installer and our scripts have different settings.

Click 'Continue' when the script has completed. For now, run this particular script on the first node only.

Confirm a few selections, and you will see the CRS installation screen:


Continue with Next. Fill in the public and private names for all nodes:


On the next screen, specify eth0 as the public and eth1 as the private interface:


(Click on an empty field to see the set of possible selections.)9

Specify OCRFile location:

Specify CSSFile location:

9 You must use THE SAME interface names for private and public interfaces on all nodes – you cannot, for example, use eth0 as public on testrac11 and eth1 as public on testrac12.


Now Oracle will ask you to run the orainstRoot.sh script on ALL nodes. Run script 210*.sh on the first and the second node:

File 210-runInventory.sh:

#
# Run it on every node
. CONFIG.sh
. /etc/profile.d/oracle.sh
sh $ORACLE_BASE/oraInventory/orainstRoot.sh

Output:
testrac11:/INST # sh 210*sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /opt/oracle/oraInventory to oinstall.
testrac11:/INST #

and

testrac12:/INST # sh 210*sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /opt/oracle/oraInventory to oinstall.
testrac12:/INST #

Click Continue, check the installation list, and Oracle will start to install CRS:


Now, Oracle requests that you run the root.sh script on ALL nodes. This is the key step of the CRS installation, because this script configures and starts CRS, and if something is wrong (host name, disk sharing, etc.) CRS will not come up.

7.3) Running root.sh scripts (by 220-runRootsh.sh) on all nodes:

This is a very important step (you can repeat it in the future if you lose your CRS shared files): it creates the shared files, configures CRS, and adds it into /etc/inittab so that the system now starts CRS automatically. Once it is all completed you can use the /etc/init.d/init.crs script to start / stop CRS (rcoracle does the same).
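Once root.sh has completed on a node, a quick manual sanity check (not one of the numbered scripts) is to confirm that the three CRS respawn entries shown later in section 10.3 really were added to /etc/inittab:

grep -E 'init\.(crsd|cssd|evmd)' /etc/inittab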

Run our script 220-runRootsh.sh first on node-1, then on node-2:

File 220-runRootsh.sh:

#
# Run this on every node
#
. CONFIG.sh
#
mkdir -p /opt/oracle/product/10.1.0/crs_1/log && chown oracle:dba /opt/oracle/product/10.1.0/crs_1/log
sh /opt/oracle/product/10.1.0/crs_1/root.sh
/opt/oracle/product/10.1.0/crs_1/bin/olsnodes -n
/opt/oracle/product/10.1.0/crs_1/bin/crs_stat
echo 'export PATH=$PATH:/opt/oracle/product/10.1.0/crs_1/bin' >> ~root/.bashrc

Results:
testrac11:/INST # sh 220*sh
Running Oracle10 root.sh script...\n
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/10.1.0/crs_1
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Checking to see if Oracle CRS stack is already up...
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/oracle/product/10.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
assigning default hostname testrac11 for node 1.
assigning default hostname testrac12 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: testrac11 testrac11-1 testrac11
node 2: testrac12 testrac12-1 testrac12
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /volcrs/CRS/CSSFile
Read -1 bytes of 512 at offset 872603648 in voting device (CSSFile)
Successful in setting block0 for voting disk.
Format complete.
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        testrac11
CSS is inactive on these nodes.
        testrac12
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
testrac11       1
testrac12       2

AND on second node:

testrac12:/INST # sh 220*sh
Running Oracle10 root.sh script...\n
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /opt/oracle/product/10.1.0/crs_1
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Checking to see if Oracle CRS stack is already up...
Setting the permissions on OCR backup directory
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/oracle/product/10.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
clscfg: EXISTING configuration version 2 detected.
clscfg: version 2 is 10G Release 1.
assigning default hostname testrac11 for node 1.
assigning default hostname testrac12 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: testrac11 testrac11-1 testrac11
node 2: testrac12 testrac12-1 testrac12
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        testrac11
        testrac12
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
testrac11       1
testrac12       2
CRS-0202: No resources are registered.

You can click Continue (after seeing the last message) and see the report about successful installation:

7.4) Running 230-check-crs.sh - verify CRS.

Now, verify that CRS is really started on ALL nodes. Use script 230:

File 230-check-crs.sh:


#
#       230)    Now, check if crs works
#
. ~oracle/.profile
. ~/.bashrc
rm /var/log/crs
ln -s ~oracle/product/*/crs_1/crs/log/ /var/log/crs

crsctl check install
tail /var/log/crs/*

Script:
- Creates a shortcut to the CRS logs - /var/log/crs - so that you can always watch the CRS logs by simply typing tail -f /var/log/crs/*;
- Shows you the recent CRS logs.

testrac11:/INST # sh 230*sh
rm: cannot remove `/var/log/crs': No such file or directory
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        testrac11
        testrac12
CSS is active on all nodes.
2005-04-06 16:18:08.532: CRSD locked during state recovery, please wait.
2005-04-06 16:18:08.534: CRSD recovered, unlocked.
2005-04-06 16:18:08.535: QS socket on: (ADDRESS=(PROTOCOL=ipc)(KEY=ora_crsqs))
2005-04-06 16:18:08.536: UI socket on: (ADDRESS=(PROTOCOL=ipc)(KEY=testrac11_testrac_caa))
2005-04-06 16:18:08.541: E2E socket on: (ADDRESS=(PROTOCOL=tcp)(HOST=testrac11-1)(PORT=49896))
2005-04-06 16:18:08.541: Starting Threads
2005-04-06 16:18:08.541: CRS Daemon Started.
2005-04-06 15:55:50.374: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-06 15:56:00.632: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-06 16:18:09.247: CRSD-1: Complete Restart Application Request

AND, on node 2:

testrac11:/INST # sh 230*sh
rm: cannot remove `/var/log/crs': No such file or directory
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        testrac11
        testrac12
CSS is active on all nodes.
2005-04-06 16:18:08.532: CRSD locked during state recovery, please wait.
2005-04-06 16:18:08.534: CRSD recovered, unlocked.
2005-04-06 16:18:08.535: QS socket on: (ADDRESS=(PROTOCOL=ipc)(KEY=ora_crsqs))
2005-04-06 16:18:08.536: UI socket on: (ADDRESS=(PROTOCOL=ipc)(KEY=testrac11_testrac_caa))
2005-04-06 16:18:08.541: E2E socket on: (ADDRESS=(PROTOCOL=tcp)(HOST=testrac11-1)(PORT=49896))
2005-04-06 16:18:08.541: Starting Threads
2005-04-06 16:18:08.541: CRS Daemon Started.
2005-04-06 15:55:50.374: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-06 15:56:00.632: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-06 16:18:09.247: CRSD-1: Complete Restart Application Request


8. Database server installation.

Now you have Cluster Ready Services installed and configured. It is time to install the Oracle Database.

8.1) 300-check-env.sh - checks the environment for the Oracle database.

File 300-check-env.sh:

#
# 300) check oracle env
#
echo " "
echo " Please, run me on all nodes and be sure that ORACLE_BASE is correct and ORACLE_SID is different\33[m"
echo " "
su - -c "set | grep ORACLE" oracle

The script is simple, no comments needed.

Results:

testrac11:/INST # sh 300*sh

 Please, run me on all nodes and be sure that ORACLE_BASE is correct and ORACLE_SID is different

ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/10.1.0/db_1
ORACLE_OWNER=oracle
ORACLE_SID=test1
SET_ORACLE_KERNEL_PARAMETERS=yes
START_ORACLE_DB_CRS=yes
START_ORACLE_DB_EM=yes
START_ORACLE_DB_OCFS=yes

and

testrac12:/INST # sh 300*sh

 Please, run me on all nodes and be sure that ORACLE_BASE is correct and ORACLE_SID is different\33[m

ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/10.1.0/db_1
ORACLE_OWNER=oracle
ORACLE_SID=test2
SET_ORACLE_KERNEL_PARAMETERS=yes
START_ORACLE_DB_CRS=yes
START_ORACLE_DB_EM=yes
START_ORACLE_DB_OCFS=yes

8.2) 310-InstOracle.1sh - starts Oracle Installer for the main database.

Run this script on node1 only.

File 310-InstOracle.1sh:


#
#       310) Install Oracle cluster manager
#       Enter /u02/OCRFile and /u02/VCSFile as shared files
. CONFIG.sh
D=$VOLDB
DB=$ORA10CD1
. /etc/profile.d/oracle.sh
. ~oracle/.profile

  sux - -c "cd $DB; ./runInstaller" oracle  wait  sleep 4  echo -e "\033[32m  "  echo "Install ORACLE database Enterprise Edition"  echo "Do not create database, we wil use dbca for this purpose"  echo "Run 320-run-root-in-X-env.sh file when prompted. Be sure that you have X allowed"  echo "from ALL Nodes"  echo -e "\033[m"

It starts Oracle Universal Installer for Oracle Database.

testrac11:/INST # sh 310*sh
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, SuSE-8 or UnitedLinux-1.0 Passed

All installer requirements met.

Checking Temp space: must be greater than 80 MB. Actual 27471 MB Passed
Checking swap space: must be greater than 150 MB. Actual 8193MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2005-04-06_04-54-40PM. Please wait ...
Oracle Universal Installer, Version 10.1.0.3.0 Production
Copyright (C) 1999, 2004, Oracle. All rights reserved.

Install ORACLE database Enterprise Edition
Do not create database, we will use dbca for this purpose
Run 320-run-root-in-X-env.sh file when prompted. Be sure that you have X allowed
from ALL Nodes

It starts Oracle Universal Installer:
- Click Next 2 times:


- Select both nodes for installation:

- Select Enterprise Edition;
- Select Do not create database;
- Keep DBA GROUP as is (disk);
- Click INSTALL.

After Oracle is installed, you will be prompted to run the root.sh script on all nodes:

8.3) 320-RunRootSh.sh – running root.sh scripts. Patching oracle for x86_64.

Run 320*sh script (or you can run root.sh directly) on ALL nodes.


ATTENTION. If you are installing Oracle 10.1.0.3 RAC on the x86_64 platform, you can run into Oracle bug # 4045013. In this case, your root.sh script will not launch the 'vipca' GUI application, and you will see a 'java' process looping in 'top'. You can check for the bug by running

srvctl status database -d test

it never completes if this bug is present.
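A quick way to spot the symptom described above (just a manual check, not one of the numbered scripts):

top -b -n 1 | grep java     # a java process stuck near 100% CPU during root.sh suggests bug # 4045013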

If it happens, kill root.sh (^C), click OK and exit from the installer (YOU MUST EXIT BEFORE ANY OTHER ACTIONS!). Open metalink.oracle.com, find bug # 4045013, and if it is applicable to you, request interim patch # 4045013 from Oracle. Unzip the patch, set up its location in the CONFIG.sh file, and run script 315*sh, which automates the patch installation, ON ALL NODES, one after another (you can apply the patch manually, of course):

sh 315*sh

To verify the fix, run again

srvctl status database -d test

You will see an error message (because the database does not exist yet).

Now, run script 320 AGAIN (or run root.sh manually) on ALL nodes.

This bug should be fixed in Oracle 10.1.0.4 and Oracle 10.2.

Another possible problem (on the x86_64 platform) is LDAP authentication – do not use LDAP, NIS or other non-local authentication for the RAC cluster. Check the '/etc/nsswitch.conf' file if your CRS cannot start services (with a CRS-0215 message, for example), and remove all LDAP and NIS entries from it.
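A minimal sketch of that check, assuming the usual SLES9 defaults (the exact entries on your system may differ):

grep -E '^(passwd|group|hosts):' /etc/nsswitch.conf
# on the RAC nodes these lines should reference local sources only, for example:
#   passwd: compat
#   group:  compat
#   hosts:  files dns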

File 320-RunRootSh.sh:

#
# Run this on every node in X11 mode
#
. CONFIG.sh
. /etc/profile.d/oracle.sh
sh $ORACLE_HOME/root.sh
echo "DO not forget to run it on ALL Nodes one by one"

Output:
testrac12:/INST # sh 320*sh
Running Oracle10 root.sh script...\n
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /opt/oracle/product/10.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/bin]:
   Copying dbhome to /usr/bin ...
   Copying oraenv to /usr/bin ...
   Copying coraenv to /usr/bin ...

\nCreating /etc/oratab file...
Adding entry to /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.


The script starts vipca (Virtual IP Configuration Assistant). First, select only the public interface:

Second, configure the Virtual IP addresses that you assigned to the servers:

(press Enter after you change any field). After NEXT, check the settings and click FINISH. vipca will configure the node applications:


Check the results and exit. Now you have the node applications configured for auto start by the CRS manager. You can see their status using crs_stat -t (one of my scripts added /opt/oracle/product/10.1.0/crs_1/bin to root's PATH):

testrac11:/INST # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....c11.gsd application    ONLINE    ONLINE    testrac11
ora....c11.ons application    ONLINE    ONLINE    testrac11
ora....c11.vip application    ONLINE    ONLINE    testrac11
ora....c12.gsd application    ONLINE    ONLINE    testrac12
ora....c12.ons application    ONLINE    ONLINE    testrac12
ora....c12.vip application    ONLINE    ONLINE    testrac12
testrac11:/INST #

Run script 320 (root.sh) on ALL nodes. It will call vipca only once (on the first node), but it implements additional settings on all nodes.
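If you also want to double-check the node applications from the command line, something like the standard 10g srvctl syntax should work (a hedged example; run it for each of your node names):

srvctl status nodeapps -n testrac11
srvctl status nodeapps -n testrac12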

8.4) 340-run-netca.1sh - run Network Configuration Assistant (on single node).

It starts the network configuration. Configure the listener and exit. Run it on one node only.

File 340-run-netca.1sh:

#!/bin/bash
#
# 340) Run netca to create listener on all nodes
#
. /etc/profile.d/oracle.sh
#
#
sux - -c "netca" oracle &
#


testrac11:/INST # sh 340*sh
testrac11:/INST # Oracle Net Services Configuration:
Configuring Listener:LISTENER
testrac11...
testrac12...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

It opens the netca screen:
Select CLUSTER configuration.

Select ALL nodes.


Select Listener configuration:

Select Add.

Use default Listener Name.
Use default protocol and port.
Do not configure another listener.

System will complete configuration:

Now, Next and Finish.


You now have the listener configured. Check the CRS logs:

testrac11:/INST # tail /var/log/crs/*
2005-04-07 22:11:41.490: Start of `ora.testrac11.ons` on member `testrac11` succeeded.
2005-04-07 22:10:47.636: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-07 22:11:31.183: CRSD-1: [CMDMAIN:2540914048] Restart waiting for Oracle CRSD to start
2005-04-07 22:11:41.495: CRSD-1: Complete Restart Application Request
2005-04-08 15:55:40.865: [RESOURCE:1413908832] Resource Registered: ora.testrac11.LISTENER_TESTRAC11.lsnr
2005-04-08 15:55:42.380: [RESOURCE:1413908832] Resource Registered: ora.testrac12.LISTENER_TESTRAC12.lsnr
2005-04-08 15:55:43.052: Attempting to start `ora.testrac11.LISTENER_TESTRAC11.lsnr` on member `testrac11`
2005-04-08 15:55:43.346: Start of `ora.testrac11.LISTENER_TESTRAC11.lsnr` on member `testrac11` succeeded.
2005-04-08 15:55:43.743: Attempting to start `ora.testrac12.LISTENER_TESTRAC12.lsnr` on member `testrac12`
2005-04-08 15:55:44.038: Start of `ora.testrac12.LISTENER_TESTRAC12.lsnr` on member `testrac12` succeeded.
testrac11:/INST #

The database server is installed, and you can now create a database (and create the ASM volumes).
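If you want to confirm the listener directly as well (a manual check; the listener name LISTENER_TESTRAC11 is taken from the CRS log above, adjust it for your node):

su - oracle -c "lsnrctl status LISTENER_TESTRAC11"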

8.5) 350-relink-with-aio.sh – Relink Oracle to turn on Async IO.

Last step – relink Oracle with Async IO turned on. I use script 350 – just run it on ALL nodes.

File 350-relink-with-aio.sh:

#!/bin/sh
. CONFIG.sh
su - oracle -c "cd $ORACLE_HOME/rdbms/lib; make PL_ORALIBS=-laio -f ins_rdbms.mk async_on; make PL_ORALIBS=-laio -f ins_rdbms.mk ioracle;echo DONE"

Output:
testrac11:/INST # sh 350*sh
rm -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o
cp /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaio.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o
chmod 755 /opt/oracle/product/10.1.0/db_1/bin

- Linking Oracle rm -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/oraclegcc -o /opt/oracle/product/10.1.0/db_1/rdbms/lib/oracle -L/opt/oracle/product/10.1.0/db_1/rdbms/lib/ -L/opt/oracle/product/10.1.0/db_1/lib/ -L/opt/oracle/product/10.1.0/db_1/lib/stubs/ -Wl,-E `test -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o && echo /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o` `test -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/sskgpsmti.o && echo /opt/oracle/product/10.1.0/db_1/rdbms/lib/sskgpsmti.o` /opt/oracle/product/10.1.0/db_1/rdbms/lib/opimai.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/ssoraed.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/ttcsoi.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/defopt.o -Wl,--whole-archive -


lperfsrv10 -Wl,--no-whole-archive /opt/oracle/product/10.1.0/db_1/lib/nautab.o /opt/oracle/product/10.1.0/db_1/lib/naeet.o /opt/oracle/product/10.1.0/db_1/lib/naect.o /opt/oracle/product/10.1.0/db_1/lib/naedhs.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/config.o -lserver10 -lodm10 -lnnet10 -lskgxp10 -lhasgen10 -lcore10 -lskgxn2 -locr10 -locrb10 -locrutl10 -lhasgen10 -lcore10 -lskgxn2 -lclient10 -lvsn10 -lcommon10 -lgeneric10 -lknlopt `if /usr/bin/ar tv /opt/oracle/product/10.1.0/db_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap10" ; fi` -ldm10 -lslax10 -lpls10 -lplp10 -lserver10 -lclient10 -lvsn10 -lcommon10 -lgeneric10 -lknlopt -lslax10 -lpls10 -lplp10 -ljox10 -lserver10 /opt/oracle/product/10.1.0/db_1/has/lib/libclsra10.a -ldbcfg10 -locijdbcst10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lmm -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `if /usr/bin/ar tv /opt/oracle/product/10.1.0/db_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo10"; fi` -lctxc10 -lctx10 -lzx10 -lgx10 -lctx10 -lzx10 -lgx10 -lordimt10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -lsnls10 -lunls10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -laio `cat /opt/oracle/product/10.1.0/db_1/lib/sysliblist` -Wl,-rpath,/opt/oracle/product/10.1.0/db_1/lib -lm `cat /opt/oracle/product/10.1.0/db_1/lib/sysliblist` -ldl -lm -L/opt/oracle/product/10.1.0/db_1/libmv -f /opt/oracle/product/10.1.0/db_1/bin/oracle /opt/oracle/product/10.1.0/db_1/bin/oracleOmv /opt/oracle/product/10.1.0/db_1/rdbms/lib/oracle /opt/oracle/product/10.1.0/db_1/bin/oraclechmod 6751 /opt/oracle/product/10.1.0/db_1/bin/oraclechmod 755 /opt/oracle/product/10.1.0/db_1/bin

- Linking Oracle rm -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/oraclegcc -o /opt/oracle/product/10.1.0/db_1/rdbms/lib/oracle -L/opt/oracle/product/10.1.0/db_1/rdbms/lib/ -L/opt/oracle/product/10.1.0/db_1/lib/ -L/opt/oracle/product/10.1.0/db_1/lib/stubs/ -Wl,-E `test -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o && echo /opt/oracle/product/10.1.0/db_1/rdbms/lib/skgaioi.o` `test -f /opt/oracle/product/10.1.0/db_1/rdbms/lib/sskgpsmti.o && echo /opt/oracle/product/10.1.0/db_1/rdbms/lib/sskgpsmti.o` /opt/oracle/product/10.1.0/db_1/rdbms/lib/opimai.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/ssoraed.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/ttcsoi.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/defopt.o -Wl,--whole-archive -lperfsrv10 -Wl,--no-whole-archive /opt/oracle/product/10.1.0/db_1/lib/nautab.o /opt/oracle/product/10.1.0/db_1/lib/naeet.o /opt/oracle/product/10.1.0/db_1/lib/naect.o /opt/oracle/product/10.1.0/db_1/lib/naedhs.o /opt/oracle/product/10.1.0/db_1/rdbms/lib/config.o -lserver10 -lodm10 -lnnet10 -lskgxp10 -lhasgen10 -lcore10 -lskgxn2 -locr10 -locrb10 -locrutl10 -lhasgen10 -lcore10 -lskgxn2 -lclient10 -lvsn10 -lcommon10 -lgeneric10 -lknlopt `if /usr/bin/ar tv /opt/oracle/product/10.1.0/db_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap10" ; fi` -ldm10 -lslax10 -lpls10 -lplp10 -lserver10 -lclient10 -lvsn10 -lcommon10 -lgeneric10 -lknlopt -lslax10 -lpls10 -lplp10 -ljox10 -lserver10 /opt/oracle/product/10.1.0/db_1/has/lib/libclsra10.a -ldbcfg10 -locijdbcst10


`cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lmm -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /opt/oracle/product/10.1.0/db_1/lib/ldflags` -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `if /usr/bin/ar tv /opt/oracle/product/10.1.0/db_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo10"; fi` -lctxc10 -lctx10 -lzx10 -lgx10 -lctx10 -lzx10 -lgx10 -lordimt10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -lsnls10 -lunls10 -lcore10 -lxml10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -laio `cat /opt/oracle/product/10.1.0/db_1/lib/sysliblist` -Wl,-rpath,/opt/oracle/product/10.1.0/db_1/lib -lm `cat /opt/oracle/product/10.1.0/db_1/lib/sysliblist` -ldl -lm -L/opt/oracle/product/10.1.0/db_1/libmv -f /opt/oracle/product/10.1.0/db_1/bin/oracle /opt/oracle/product/10.1.0/db_1/bin/oracleOmv /opt/oracle/product/10.1.0/db_1/rdbms/lib/oracle /opt/oracle/product/10.1.0/db_1/bin/oraclechmod 6751 /opt/oracle/product/10.1.0/db_1/bin/oracleDONE
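One quick, hedged way to confirm that the relinked binary really pulls in the async IO library (assuming the oracle binary was linked dynamically against libaio, as the -laio flag above suggests):

ldd /opt/oracle/product/10.1.0/db_1/bin/oracle | grep -i aio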

9. Creating first DATABASE

You have, to this point:

- CRS services installed and running;
- Database server installed;
- Node applications configured and running

testrac11:/INST # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....11.lsnr application    ONLINE    ONLINE    testrac11
ora....c11.gsd application    ONLINE    ONLINE    testrac11
ora....c11.ons application    ONLINE    ONLINE    testrac11
ora....c11.vip application    ONLINE    ONLINE    testrac11
ora....12.lsnr application    ONLINE    ONLINE    testrac12
ora....c12.gsd application    ONLINE    ONLINE    testrac12
ora....c12.ons application    ONLINE    ONLINE    testrac12
ora....c12.vip application    ONLINE    ONLINE    testrac12
testrac11:/INST #

- Raw device allocated and configured for ASM storage:

testrac11:/INST # ls -l /dev/rawnames
total 177
drwxr-xr-x   2 root root        224 Apr  6 18:25 .
drwxr-xr-x  42 root root     181480 Apr  6 18:24 ..
crw-rw----   1 root disk     162, 3 Apr  6 12:38 CSSFile
crw-r-----   1 root oinstall 162, 2 Apr  6 12:38 OCRFile
crw-rw----   1 root disk     162, 7 Apr  6 12:38 fcFCdisk
crw-rw----   1 root disk     162, 6 Apr  6 12:38 fcSATAdisk
crw-rw----   1 root disk     162, 1 Apr  6 12:38 iASM0
crw-rw----   1 root disk     162, 5 Apr  6 12:38 iASM1
crw-rw----   1 root disk     162, 4 Apr  6 12:38 iBIGASM

It's time to create the first database. We will name it test (because we used SIDs test1 and test2, and some of our scripts depend on this SID).


9.1) Run dbca (400-create-database.sh) - starts dbca and creates the database.

I use a script to start dbca, but you can do it manually from user 'oracle'.

File 400-create-database.sh:

#!/bin/bash
#
# 400) Create database. See guide below.
#
. CONFIG.sh
. /etc/profile.d/oracle.sh
#
echo -e "*******************************
You are about to create a database
\033[31m Use ASM storage. Create \033[1m filesystemio_options=directIO \033[m during creating database.
*******************************"
echo -n "Continue__?"
read x
sux - -c "dbca -datafileDestination +TESTDB" oracle &
#
#wait

We:

- Create database TEST of transactional type;
- Create ASM disk group TESTDB, using iSCSI volume;
- Start database;
- Using OEM DB console, create new TESTLOG group for REDO files;
- Using OEM DB console, create new REDO files on this group;
- Using OEM DB console, add disk into TESTDB group.

It opens DBCA window:

Select “Real Application Cluster database”:


Select “Create a database”; Select all nodes:

I select Transaction processing because I run the TPC-C test from the http://hammerora.sf.net project.


Enter the database name and the SID name test:

Select Enterprise Manager (you can configure e-mail notification if you want) on next window;

Configure passwords (please, never use passwords like 'manager' and 'tiger'!);

Now, select Automatic storage management option:


You will be prompted to configure ASM database authentication.

Confirm ASM initialisation:


Now, you must configure Disk groups:

Create new; Change disk discovery path:


Now you can select from our raw disks:

(If you do not see some disks here, check your /etc/rawnames.conf file, and check that all devices created in /dev/rawnames are readable. You can edit /etc/rawnames.conf, run /etc/init.d/rawnames start once again, and then repeat the disk scan by running change disk discovery path again. Be sure that all raw devices are the same on all nodes.)
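For convenience, the re-check described above boils down to something like this on each node (commands taken from this document; rawnames is our own script):

/etc/init.d/rawnames start
ls -l /dev/rawnames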

Now I select:
o Group name TESTDB;
o External redundancy (we have RAID on NetApp);
o Disk iASM0 for this disk group.
o Click OK.

I create another ASM group for the logs:


And now I can select this disk group for our test database:

Now we can continue:


Now it's time to specify logs and flash recovery. I will not use Archiving in this test, because I need this database for TPC-C testing. Please read the documentation about Archive Logs, Flash Recovery and RAC before doing any serious installations.

Skip SAMPLES; Configure database services. Pay extra attention to the TAF Policy – it specifies how your clients will use the RAC instances (for clients running locally):


Click Next; Now I open All Initialisation Parameters and add a number of DB writer slaves, because I use software iSCSI, which adds extra delay to IO operations, so I want to have a few writes in parallel:


I configure other parameters here as well, for example, a bigger SGA for my tests:


I increase the number of REDO logs and their size here, but dbca is not the best tool to do it, and you will (most likely) use DB Console or sqlplus to change these settings after starting the database.10

10 The Redo configuration GUI has some mystical problems – it is extremely slow, and I found it very inconvenient. Fortunately, you do not need to use it – you can use your own template or change the REDO logs later using DB console or sqlplus.
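For reference, once the database is up, adding a redo log group later can be done from sqlplus; this is only a minimal sketch (the group number and size are arbitrary examples, TESTLOG is the log disk group created above):

su - oracle -c "sqlplus / as sysdba" <<'EOF'
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+TESTLOG') SIZE 100M;
EOF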


Now, we can create database:

and


A last OK, and the system begins database creation:


I use xosview to watch the process. Here is my desktop screen during this installation:

After 5 minutes of waiting, the database is created. Notice the HTTP address of the EM Console (named dbconsole in the emctl command):


Click OK and the system begins starting the database instances:

When it completes, the database is created, configured and started. Check everything by opening HTTP on port 5500 (/em), for example, http://testrac11:5500/em :


9.2) Edit /etc/oratab.

I run script 410 to correct a small Oracle inconsistency and to edit /etc/oratab. Set the instance names test1 and test2, respectively, and do not set 'Y' for the RAC instances (they are started by the CRS service).

File 410-edit-oratab.sh:

#
# 410) For some reason, dbca creates oratab in /etc while the system needs it in /var/opt/oracle
#
ln -s /etc/oratab /var/opt/oracle/oratab
vi /etc/oratab
chown oracle:dba /etc/oratab

Output:
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
*:/opt/oracle/product/10.1.0/db_1:N
+ASM1:/opt/oracle/product/10.1.0/db_1:N
test1:/opt/oracle/product/10.1.0/db_1:N

(Use test2 on second node).

Now you can see system status by running ‘rcoracle status’:

testrac12:/INST # rcoracle status | more

#############################################################################


# Begin of O R A C L E status section ##############################################################################

Kernel Parameters
Shared memory: SHMMAX= 3294967296  SHMMNI= 4096  SHMALL= 2097152
Semaphore values: SEMMSL, SEMMNS, SEMOPM, SEMMNI:  1250 32000 100 256

Database-Instances
Instance +ASM2 is up \(autostart: N\)
Instance test2 is up \(autostart: N\)

TNS-Listener: down

Web-Server (Apache httpd): down (0 processes)

Process list for user oracle: PID TTY STAT TIME COMMAND28109 ? Ss 0:00 /bin/su -l oracle -c exec /opt/oracle/product/10.1.0/crs_1/bin/evmd 28219 ? S 0:00 /bin/su -l oracle -c /opt/oracle/product/10.1.0/crs_1/bin/ocssd || exit 13728220 ? S 0:00 -su -c /opt/oracle/product/10.1.0/crs_1/bin/ocssd || exit 13728246 ? Ss 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28271 ? Ss 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28277 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28279 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28291 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28292 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28293 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28294 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28295 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmlogger.bin -o /opt/oracle/product/10.1.0/crs_1/evm/log/evmlogger.28296 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28297 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28298 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28299 ? S 0:07 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28300 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28309 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28310 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28315 ? S 0:24 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28316 ? S 0:01 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28317 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28318 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28319 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28320 ? S 0:05 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28321 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28322 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin28324 ? S 0:01 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin28610 ? Ss 0:00 /opt/oracle/product/10.1.0/db_1/opmn/bin/ons -d28611 ? S 0:00 /opt/oracle/product/10.1.0/db_1/opmn/bin/ons -d31759 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin31760 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/ocssd.bin31763 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmlogger.bin -o /opt/oracle/product/10.1.0/crs_1/evm/log/evmlogger.31957 ? S 0:00 /opt/oracle/product/10.1.0/crs_1/bin/evmd.bin 4696 ? Ss 0:00 /opt/oracle/product/10.1.0/db_1/bin/tnslsnr LISTENER_TESTRAC12 -inherit 9807 ? Ss 0:00 asm_pmon_+ASM2 9809 ? Ss 0:00 asm_diag_+ASM2 9811 ? Ss 0:00 asm_lmon_+ASM2 9813 ? Ss 0:00 asm_lmd0_+ASM2 9815 ? Ss 0:00 asm_lms0_+ASM2 9817 ? Ss 0:00 asm_mman_+ASM2 9819 ? Ss 0:00 asm_dbw0_+ASM2 9821 ? Ss 0:00 asm_lgwr_+ASM2 9823 ? Ss 0:00 asm_ckpt_+ASM2 9825 ? Ss 0:00 asm_smon_+ASM2 9827 ? Ss 0:00 asm_rbal_+ASM2 9829 ? Ss 0:00 asm_lck0_+ASM2 9847 ? S 0:00 /opt/oracle/product/10.1.0/db_1/bin/racgimon daemon ora.testrac12.ASM2.asm14029 ? Ss 0:00 asm_pz99_+ASM214463 ? S 0:00 /opt/oracle/product/10.1.0/db_1/perl/bin/perl /opt/oracle/product/10.1.0/db_1/bin/emwd.pl dbconsole /opt/15343 ? Ss 0:00 asm_pz98_+ASM215474 ? S 0:00 /opt/oracle/product/10.1.0/db_1/bin/emagent


15532 ? Ss 0:00 oracle+ASM2 (LOCAL=NO)16076 ? Ss 0:00 ora_pmon_test216078 ? Ss 0:00 ora_diag_test216080 ? Ss 0:00 ora_lmon_test216082 ? Ss 0:01 ora_lmd0_test216084 ? Ss 0:00 ora_lms0_test216086 ? Ss 0:00 ora_lms1_test216088 ? Ss 0:00 ora_mman_test216090 ? Ss 0:00 ora_dbw0_test216092 ? Ss 0:00 ora_lgwr_test216094 ? Ss 0:00 ora_ckpt_test216096 ? Ss 0:00 ora_smon_test216098 ? Ss 0:00 ora_reco_test216100 ? Ss 0:00 ora_cjq0_test216102 ? Ss 0:00 ora_d000_test216104 ? Ss 0:00 ora_s000_test216122 ? Ss 0:01 ora_lck0_test216128 ? Ss 0:00 ora_asmb_test216130 ? Ss 0:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))16132 ? Ss 0:00 ora_rbal_test216140 ? S 0:25 /opt/oracle/product/10.1.0/db_1/jdk/bin/java -server -Xmx512M -XX:MaxPermSize=64m -XX:MinHeapFreeRatio=2016158 ? Ss 0:00 ora_o001_test216175 ? Ss 0:00 ora_i201_test216177 ? Ss 0:00 ora_qmnc_test216181 ? Ss 0:01 oracletest2 (LOCAL=NO)16183 ? Ss 0:00 oracletest2 (LOCAL=NO)16185 ? Ss 0:00 oracletest2 (LOCAL=NO)16187 ? Ss 0:00 ora_mmon_test216189 ? Ss 0:00 oracletest2 (LOCAL=NO)16191 ? Ss 0:00 ora_mmnl_test216193 ? Ss 0:00 oracletest2 (LOCAL=NO)16195 ? Ss 0:00 oracletest2 (LOCAL=NO)16197 ? Ss 0:03 oracletest2 (LOCAL=NO)16199 ? Ss 0:00 oracletest2 (LOCAL=NO)16201 ? Ss 0:07 oracletest2 (LOCAL=NO)16203 ? Ss 0:00 oracletest2 (LOCAL=NO)16205 ? Ss 0:01 oracletest2 (LOCAL=NO)16233 ? S 0:00 /opt/oracle/product/10.1.0/db_1/bin/racgimon daemon ora.test.test2.inst16239 ? Ss 0:00 ora_pz99_test216248 ? Ss 0:00 oracletest2 (LOCAL=NO)16250 ? Ss 0:05 oracletest2 (LOCAL=NO)16252 ? Ss 0:02 oracletest2 (LOCAL=NO)16254 ? Ss 0:00 oracletest2 (LOCAL=NO)16256 ? Ss 0:01 oracletest2 (LOCAL=NO)16258 ? Ss 0:00 ora_i101_test216260 ? Ss 0:00 ora_i102_test216322 ? Ss 0:00 oracletest2 (LOCAL=NO)16324 ? Ss 0:01 oracletest2 (LOCAL=NO)16326 ? Ss 0:00 ora_pz98_test216533 ? Ss 0:01 oracletest2 (LOCAL=NO)16536 ? Ss 0:01 oracletest2 (LOCAL=NO)16781 ? Ss 0:00 ora_q001_test217378 ? Ss 0:00 oracletest2 (LOCAL=NO)17380 ? Ss 0:00 oracletest2 (LOCAL=NO)28111 ? Ss 0:07 crsd.bin28233 ? Zs 0:00 crsd.bin <defunct>28110 ? Ss 0:00 init.cssd28271 ? Ss 0:00 ocssd.bin28309 ? S 0:00 ocssd.bin28310 ? S 0:00 ocssd.bin28315 ? S 0:24 ocssd.bin28316 ? S 0:01 ocssd.bin28317 ? S 0:00 ocssd.bin28318 ? S 0:00 ocssd.bin28319 ? S 0:00 ocssd.bin28320 ? S 0:05 ocssd.bin28321 ? S 0:00 ocssd.bin28322 ? S 0:00 ocssd.bin31759 ? S 0:00 ocssd.bin31760 ? S 0:00 ocssd.bin2005-04-07 22:10:50.911: Attempting to start `ora.testrac11.vip` on member `testrac12`2005-04-07 22:10:56.306: Start of `ora.testrac11.vip` on member `testrac12` succeeded.2005-04-07 22:10:56.308: [MEMBERLEAVE:1413908832] Do failover for: testrac11


2005-04-07 22:10:56.309: [MEMBERLEAVE:1413908832] Post recovery done evmd event for: testrac11
2005-04-08 18:55:54.899: Attempting to start `ora.test.db` on member `testrac11`
2005-04-08 18:55:55.043: Start of `ora.test.db` on member `testrac11` succeeded.
2005-04-08 18:56:04.471: Attempting to start `ora.test.test.test1.srv` on member `testrac11`
2005-04-08 18:56:04.615: Attempting to start `ora.test.test.test2.srv` on member `testrac12`
2005-04-08 18:56:04.918: Start of `ora.test.test.test1.srv` on member `testrac11` succeeded.
2005-04-08 18:56:05.019: Start of `ora.test.test.test2.srv` on member `testrac12` succeeded.

#############################################################################
# End of O R A C L E section #
#############################################################################

10. Basic RAC management. 11

10.1) CRS logs.

You can watch CRS activity by looking into the CRS log files (/opt/oracle/product/10.1.0/crs_1/crs/log/*, which we symlinked to /var/log/crs for simplicity):

tail -f /var/log/crs/*

Output:
testrac11:/INST # tail -20 /var/log/crs/*
2005-04-08 15:55:44.038: Start of `ora.testrac12.LISTENER_TESTRAC12.lsnr` on member `testrac12` succeeded.
2005-04-08 18:07:21.027: [RESOURCE:1397131616] Resource Registered: ora.testrac11.ASM1.asm
2005-04-08 18:07:21.978: Attempting to start `ora.testrac11.ASM1.asm` on member `testrac11`
2005-04-08 18:07:23.045: Start of `ora.testrac11.ASM1.asm` on member `testrac11` succeeded.
2005-04-08 18:07:24.115: [RESOURCE:1397131616] Resource Registered: ora.testrac12.ASM2.asm
2005-04-08 18:07:25.372: Attempting to start `ora.testrac12.ASM2.asm` on member `testrac12`
2005-04-08 18:07:29.415: Start of `ora.testrac12.ASM2.asm` on member `testrac12` succeeded.
2005-04-08 18:45:40.011: [RESOURCE:1455851872] Resource Registered: ora.test.db
2005-04-08 18:45:41.685: [RESOURCE:1455851872] Resource Registered: ora.test.test1.inst
2005-04-08 18:45:43.415: [RESOURCE:1455851872] Resource Registered: ora.test.test2.inst
2005-04-08 18:48:27.172: [RESOURCE:1439074656] Resource Registered: ora.test.test.cs
2005-04-08 18:48:28.418: [RESOURCE:1439074656] Resource Registered: ora.test.test.test2.srv
2005-04-08 18:48:29.426: [RESOURCE:1439074656] Resource Registered: ora.test.test.test1.srv
2005-04-08 18:55:35.455: `ora.test.test1.inst` is already OFFLINE.
2005-04-08 18:55:42.744: Attempting to start `ora.test.test1.inst` on member `testrac11`

11 This information is provided for the unskilled DBA only, and only for general orientation. You should read the Oracle documentation about RAC clusters before managing any production grade database.


2005-04-08 18:55:42.893: Attempting to start `ora.test.test2.inst` on member `testrac12`
2005-04-08 18:56:12.085: Start of `ora.test.test2.inst` on member `testrac12` succeeded.
2005-04-08 18:56:20.990: Start of `ora.test.test1.inst` on member `testrac11` succeeded.
2005-04-08 18:56:21.498: Attempting to start `ora.test.test.cs` on member `testrac12`
2005-04-08 18:56:22.679: Start of `ora.test.test.cs` on member `testrac12` succeeded.
testrac11:/INST #

10.2) EM Console.

The EM console is the web interface to Oracle Enterprise Manager, running (for the first database) on port 5500, path /em (dbca reports the port number for other databases). It is pretty useful for everything except emergency work, recoveries and other activities performed with the database in a non-functional state (because OEM depends on the database to function). I even recommend creating a small test database, never touching it, and using it to run EM, which gives you access to the ASM instance as well as to the 'test' instance. See the example above (in the previous section).
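A quick way to check and control the console from the shell (the dbconsole name comes from the emctl command mentioned above; run it as the oracle user):

su - oracle -c "emctl status dbconsole"
su - oracle -c "emctl start dbconsole"     # only if it is not already running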

10.3) Starting, stopping and managing.

The clustered database is started and stopped by the Cluster Ready Services (CRS) manager, so our start / stop script only starts and stops the CRS processes. Moreover, Cluster Ready Services and 2 other components are installed by the Oracle installation into /etc/inittab, and are automatically started by the system init when it completes runlevel 3 and runlevel 5 initialisation:

h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

- evmd is the process responsible for event exchange between cluster members;
- cssd is the cluster synchronisation services daemon;
- crsd is the cluster ready services daemon, responsible for starting / stopping everything.

If init fails to start any of these daemons a few times in a row, it suspends further attempts for 5 minutes. The system script /etc/init.d/init.crs controls the flags allowing these 3 processes to start, and sends a signal to the running scripts when they are requested to stop. As a result, when you have stopped CRS (or when it dies for any reason) and want to restart it, after the restart is allowed you can kill -1 1 to speed up the process respawning.
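Put together, a sketch of a manual CRS restart on one node, using only the commands described in this document:

/etc/init.d/init.crs start    # allow the CRS daemons to start again
kill -1 1                     # ask init to rescan /etc/inittab and respawn them without the 5-minute delay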

I recommend linking the script provided by Oracle to the root command rccrs:

ln -s /etc/init.d/init.crs /sbin/rccrs

Do not run the other 3 scripts provided by Oracle (/etc/init.d/init.*) directly – they are started / stopped by init and controlled by the init.crs script.

We already linked the CRS log directory as /var/log/crs, so we can watch the CRS logs with


tail -f /var/log/crs/*

You should know at least the srvctl Oracle command for the very first steps of RAC management. For example, run:

testrac11:~ # srvctl status database -d test
Instance test1 is running on node testrac11
Instance test2 is running on node testrac12

You can use our (modified) rcoracle (/etc/init.d/oracle) script to start, stop and request the status of the database:

rcoracle stop – stops CRS and all cluster database instances;
rcoracle start – starts CRS (which will start the instances);
rcoracle status – shows the status;
rcoracle kill – dumb brute-force killing of all oracle processes, including any oracle user processes. Used as a last resort to release a busy resource (an NFS disk, for example).

10.4) Verify system shutdown and startup.

Verify system shutdown and startup in a few scenarios BEFORE using the newly created database for any real data. Do not forget to test all scenarios, for example:

- Server reboot, in 2 cases:
  o when 1 node reboots while the other node runs normally;
  o when both nodes reboot at the same time.
- Server crash, the same two scenarios;
- Oracle stop and start, without a system reboot;
- Filer controller failure (with cluster takeover) and giveback;
- One of the filer network connections fails, then is restored;
- Recovery from backup.
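For instance, the single-node reboot scenario can be checked roughly as follows (a sketch using the names of this installation; how quickly the instance returns depends on CRS):

testrac11:~ # srvctl status database -d test   # both instances should be up
testrac12:~ # reboot                           # reboot the second node
testrac11:~ # srvctl status database -d test   # test2 should be reported as not running
  ... wait until testrac12 is back and CRS has restarted test2 ...
testrac11:~ # srvctl status database -d test   # both instances up again
testrac11:~ # tail /var/log/crs/*              # check the CRS logs for errors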

We installed (via the orarun package and then via our script) the rcoracle script (a link to /etc/init.d/oracle), which is controlled by the /etc/sysconfig/oracle configuration. Try stopping and starting the database on one of the servers, for example as sketched below:
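A minimal check might look like this (a sketch; the exact output of rcoracle differs in detail):

testrac11:~ # rcoracle stop      # stops CRS and, with it, the local instances
testrac11:~ # rcoracle status
testrac11:~ # rcoracle start     # starts CRS, which restarts the instances
testrac11:~ # srvctl status database -d test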

Problems.
Problems that were not resolved at the time this document was written.

1) Enterprise Manager refuses to recognize the ASM instance – I do not see 'Disk groups' on the 'Management' screen of the database, and when I try to open the ASM instance through the cluster (as recommended in the tutorial), I get an Internal Error from Java. It may be related to the x86_64 hardware platform, the Linux version or something else. Still troubleshooting.


References:
1. http://www.novell.com/products/linuxenterpriseserver/oracle/documents.html - Oracle installation guides from SUSE (an extremely useful set of documents);
2. http://www.netapp.com/tech_library/3339.html - Oracle 10g RAC installation with RedHat Linux and a Network Appliance filer (from NetApp). This document has one small bug – you should never use the noac option for the ORACLE_HOME volume;
3. http://www.oracle-base.com/index.php - numerous articles about Oracle 10g;
4. www.oracle.com - Oracle main web site (with links to Oracle Technology Network, Metalink and other useful resources);
5. http://oss.oracle.com - Oracle and Linux project site;
6. http://lists.suse.com/archive/suse-oracle/ - SUSE and Oracle mailing list.

APPENDIX 1. Modified orarun scripts.
These files are slightly modified components of the 'orarun' package, which is part of the SUSE Linux Enterprise Server distribution. I hope a future orarun package will make these modifications unnecessary. I print them here only to allow a full understanding of this installation process.

File /etc/profile.d/oracle.sh (FILE.d/oracle.sh):

#
# This is a special version of the oracle.sh file, modified for Oracle 10g RAC installation on SLES9
# Do not use it in other cases.
#
# Login environment variable settings for Oracle
# The code below is done ONLY if the user is "oracle":
# Set the ULIMITs for the shell and add gcc_old to PATH if it's installed
#

# Get settings, if file(s) exist(s). If not, we simply use defaults.
if test -f /etc/sysconfig/oracle; then
    # new location as of SL 8.0 is directory /etc/sysconfig/
    . /etc/sysconfig/oracle
else
    if test -f /etc/rc.config.d/oracle.rc.config; then
        # location is directory /etc/rc.config.d/
        . /etc/rc.config.d/oracle.rc.config
    else
        if test -f /etc/rc.config; then
            # old SuSE location was to have everything in one file
            . /etc/rc.config
        fi
    fi
fi

ORACLE_SID=test1

ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/10.1.0/db_1

TNS_ADMIN=$ORACLE_HOME/network/admin
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
PATH=$PATH:$ORACLE_HOME/bin
LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib


export ORACLE_BASE ORACLE_HOME ORA_NLS33 ORACLE_SID PATH LD_LIBRARY_PATH CLASSPATH TNS_ADMIN

# ORACLE_TERM=xterm; export ORACLE_TERM
# NLS_LANG=AMERICAN_AMERICA.UTF8; export NLS_LANG

#
# This requires the limits to have been increased by root
# e.g. at boot time by the /etc/rc.d/oracle script, both
# ulimit and kernel parameters.
#

# Add package "gcc_old" gcc 2.95.3 to path - FIRST
#if test -x /opt/gcc295/bin/gcc; then
#    export PATH=/opt/gcc295/bin:$PATH
#fi

# RAC (cluster) component GSD commands don't run when this is
# set - which it is if a SuSE Java package is installed.
unset JAVA_BINDIR JAVA_HOME

# Set ulimits:
#
# We suppress any warning messages, so if the hard limits have not been
# increased by root and the commands don't work we keep silent...
# This is because the only one who needs it is the shell that starts
# the DB server processes, and the number of warning messages created
# here is potentially way too much and confusing

# core dump file size
ulimit -c ${MAX_CORE_FILE_SIZE_SHELL:-0} 2>/dev/null

# max number of processes for user
ulimit -u ${PROCESSES_MAX_SHELL:-16384} 2>/dev/null

# max number of open files for user
ulimit -n ${FILE_MAX_SHELL:-65536} 2>/dev/null

export LD_ASSUME_KERNEL=2.4.21

File /etc/sysconfig/oracle (FILE.d/sysconfig.oracle):

#
# Note: the environment variables are set in /etc/profile.d/oracle.[c]sh
# Owner of Oracle installation; Oracle will be started as that user
#
# This is the file for an ORACLE 10g CRS system
# Use the original file (from orarun) for Oracle9 and Oracle8
#
ORACLE_OWNER="oracle"

#
# Start Oracle Cluster Filesystem (for RAC)
#
START_ORACLE_DB_OCFS="yes"

#
# Start Cluster Ready Services (for RAC10g; notice - they are started by inittab, so this line does not influence the original start)
# Setting NO does not turn it off, but prevents this script from stopping and restarting CRS
#
START_ORACLE_DB_CRS="yes"


#
# Start Enterprise Manager for Oracle10
#
START_ORACLE_DB_EM="yes"

#
# Set the KERNEL PARAMETERS for Oracle. Requires a 2.4 kernel, in 2.2
# kernels only SHMMAX can be set during runtime via /proc.
#
# DO NOT CHANGE ANY VALUES unless you KNOW what you are doing and why!!!
#
# Have a look at the Oracle ReleaseNotes for the Oracle product you are
# using for how to set these values. If you do not set them we will
# assume some reasonable defaults for a medium Oracle 9i database
# system (that's a pretty big and busy one!).
#
# The /proc filesystem provides access to kernel parameters and statistics
# and the /proc/sys/ system allows one to change some kernel settings
# during runtime.
# If you have the kernel sources installed (package kernel-source)
# you can find more information here:
#     /usr/src/linux/Documentation/sysctl/ (directory)
#     /usr/src/linux/Documentation/filesystems/proc.txt
#     /usr/src/linux/Documentation/networking/ip-sysctl.txt
#
SET_ORACLE_KERNEL_PARAMETERS="yes"

#
# Max. shared memory segment size that can be allocated.
# The ONLY parameter you should touch at all (and only if you have more than
# 4 GB of physical memory) is SHMMAX. It does not make any sense to change
# the others for Oracle!
# Kernel sources header file: /usr/src/linux/include/linux/shm.h
# Recommended: SHMMAX = 0.5*(physical memory). Higher values are okay,
# since this parameter only sets the OS maximum:
# This setting does not affect how much shared memory is needed or used by
# Oracle8i or the operating system. It is used only to indicate the maximum
# allowable size. This setting also does not impact operating system
# kernel resources.
# The values for SHMSEG and SHMMIN cannot be changed via the proc-interface,
# but there is no need to change anything anyway!
# SHMSEG (default: 4096): max. number of shared segments per process
# SHMMIN (default: 1): min. size of a shared mem. segment in bytes
#
# SHMMAX max. size of a shared memory segment in bytes
# SHMMAX=3294967296
#
# SHMMNI (default: 4096): max. number of shared segments system wide
# No change is needed for running Oracle!
# SHMMNI=4096
#
# SHMALL (default: 8G [2097152]): max. shm system wide (pages)
# No change is needed for running Oracle!
# SHMALL=2097152

#
# Semaphore values
# Kernel sources header file: /usr/src/linux/include/linux/sem.h
#
# SEMVMX: semaphore maximum value. Oracle recommends a value of 32767,
# which is the default in SuSE *and* the maximum value possible.
# This value cannot be changed during runtime via the /proc interface,
# but there is no need to do so anyway!
#
# SEMMSL: max. number of semaphores per id. Set to 10 plus the largest
# PROCESSES parameter of any Oracle database on the system (see init.ora).
# Max. value possible is 8000.
# SEMMSL=1250
#
# SEMMNS: max. number of semaphores system wide. Set to the sum of the
# PROCESSES parameter for each Oracle database, adding the largest one
# twice, then add an additional 10 for each database (see init.ora).


# Max. value possible is INT_MAX (largest INTEGER value on this
# architecture, on 32-bit systems: 2147483647).
# SEMMNS=32000
#
# SEMOPM: max. number of operations per semop call. Oracle recommends
# a value of 100. Max. value possible is 1000.
# SEMOPM=100
#
# SEMMNI: max. number of semaphore identifiers. Oracle recommends
# a value of (at least) 100. Max. value possible is 32768 (defined
# in include/linux/ipc.h: IPCMNI)
# SEMMNI=256

#
# Defines the local port range that is used by TCP and UDP to
# choose the local port. The first number is the first, the
# second the last local port number. Default value depends on
# amount of memory available on the system:
#   > 128Mb  32768-61000
#   < 128Mb  1024-4999 or even less.
# This number defines the number of active connections which this
# system can issue simultaneously to systems not supporting
# TCP extensions (timestamps). With tcp_tw_recycle enabled
# (i.e. by default) the range 1024-4999 is enough to issue up to
# 2000 connections per second to systems supporting timestamps.
#
IP_LOCAL_PORT_RANGE="1024 65000"

#
# The *_SHELL settings are for the Oracle startup script (/etc/rc.d/oracle
# and 'rcoracle') *ONLY*, it does NOT have any influence on the
# limits if you login as user 'oracle' and start Oracle from there!!!
# This sets the limits for the number of open files and processes.
# FILE_MAX_SHELL *MUST* be lower than FILE_MAX_KERNEL, obviously
#
FILE_MAX_KERNEL=131072
FILE_MAX_SHELL=65536
PROCESSES_MAX_SHELL=16384
MAX_CORE_FILE_SIZE_SHELL=unlimited

#
# By Andrea Arcangeli, SuSE:
# This decreases the swappiness of the kernel. It will tend to swap less. It
# will shrink the pagecache more, before falling back into swap. So
# increasing the mapped ratio will result in less cache and less swap.
# On a lowmemory machine reducing the cache, and the swap can decrease
# performance.
# On a database machine with plenty of ram, swapping some hundred mbyte
# instead may not be necessary, better to shrink the cache, in particular
# because having that much shm allocated tends to fool the VM. The VM
# can't know if the shm is fs cache too (the shm in Oracle is mostly cache
# for the filesystem).
# So going to 1000 is probably a good idea for high end servers with
# plenty of memory. Using "1000" makes sense where you really know swapping
# is going to be not necessary during all the important workloads because
# you tune the machine in a way that it has enough ram to succeed w/o the
# need of swap. Using 1000 tells the VM to swap less.
#
VM_MAPPED_RATIO=1000

#
# Max. size of an async I/O request
#
AIO_MAX_SIZE=262144

File /etc/init.d/oracle (FILES.d/init.oracle):


#! /bin/sh
# Copyright (c)1995 SuSE GmbH Nuernberg, Germany.
#
# Author: SuSE Oracle Team <[email protected]>
# Homepage: http://www.suse.com/oracle/
#
### BEGIN INIT INFO
# Provides: oracle
# Required-Start: $network $syslog $remote_fs raw
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: Start the Oracle database
### END INIT INFO

# Shell functions sourced from /etc/rc.status:
#      rc_check         check and set local and overall rc status
#      rc_status        check and set local and overall rc status
#      rc_status -v     ditto but be verbose in local rc status
#      rc_status -v -r  ditto and clear the local rc status
#      rc_failed        set local and overall rc status to failed
#      rc_reset         clear local rc status (overall remains)
#      rc_exit          exit appropriate to overall rc status
. /etc/rc.status

# catch mis-use right here at the start
if [ "$1" != "start" -a "$1" != "stop" -a "$1" != "status" -a "$1" != "restart" -a "$1" != "kill" ]; then
    echo "Usage: $0 {start|stop|status|restart}"
    exit 1
fi

LOAD_OCFS=/sbin/load_ocfs
MOUNT=/bin/mount
UMOUNT=/bin/umount
MKDIR=/bin/mkdir
RMMOD=/sbin/rmmod

CHECKPROC="/sbin/checkproc"
test -x "$CHECKPROC" || CHECKPROC="test -x "

# Get settings, if file(s) exist(s). If not, we simply use defaults.
if test -f /etc/sysconfig/oracle; then
    # new location as of SL 8.0 is directory /etc/sysconfig/
    . /etc/sysconfig/oracle
else
    if test -f /etc/rc.config.d/oracle.rc.config; then
        # location is directory /etc/rc.config.d/
        . /etc/rc.config.d/oracle.rc.config
    else
        if test -f /etc/rc.config; then
            # old SuSE location was to have everything in one file
            . /etc/rc.config
        fi
    fi
fi

# Determine the base and follow a runlevel link name.
# DISABLED by default because it's very individual...
#base=${0##*/}
#link=${base#*[SK][0-9][0-9]}
# Force execution if not called by a runlevel directory.
#test $link = $base && START_ORACLE_DB="yes" && START_ORACLE_DB_LISTENER="yes" && SET_ORACLE_KERNEL_PARAMETERS="yes"

# First reset status of this service
rc_reset

#
# Get and check environment (e.g. ORACLE_HOME)
#
ora_environment()
{
    test -f /etc/profile.d/oracle.sh && . /etc/profile.d/oracle.sh
    if [ ! -z "$ORACLE_HOME" -a ! -d "$ORACLE_HOME" ]; then
        echo
        echo "${warn}ORACLE_HOME directory $ORACLE_HOME does not exist!$norm"
        echo "Unsetting ORACLE_HOME, will try to determine it from system..."
        unset ORACLE_HOME
    fi

    # Try /etc/oratab if it's not set in /etc/profile.d/oracle.sh
    test -z "$ORACLE_HOME" && test -f /etc/oratab && \
        ORACLE_HOME=`awk -F: '/^[^#].*:.+:[YN]/ {if ($2!="") print $2; exit}' </etc/oratab` && \
        echo && echo "ORACLE_HOME not set, but I found this in /etc/oratab: $ORACLE_HOME" && echo

    if [ -z "$ORACLE_HOME" ]; then
        echo "${warn}ORACLE_HOME environment variable not set.$norm"
        echo "Check /etc/profile.d/oracle.sh and /etc/oratab"
    fi

    if [ ! -d "$ORACLE_HOME" ]; then
        echo "${warn}Cannot find ORACLE_HOME directory $ORACLE_HOME.$norm"
        echo "Environment settings are wrong! Check /etc/profile.d/oracle.sh"
    fi

    test -z "$ORACLE_OWNER" && ORACLE_OWNER="oracle"

    if [ "$1" = "start" ]; then
        echo -n " ${extd}SETTINGS $1 from /etc/sysconfig/oracle$norm"
        if [ ! -f /etc/sysconfig/oracle ]; then
            echo " - ${warn}!!! MISSING !!!$norm"
        else
            echo
        fi
        echo " - Set Kernel Parameters for Oracle: ${SET_ORACLE_KERNEL_PARAMETERS:-no}"
        echo " - Start Oracle OCFS: ${START_ORACLE_DB_OCFS:-no}"
        echo " - Start Oracle CRS: ${START_ORACLE_DB_CRS:-no}"
        echo " - Start Oracle EM: ${START_ORACLE_DB_EM:-no}"
    fi
}

# Here we finally get to do the real work.
case "$1" in
    start)
        echo
        echo "#############################################################################"
        echo "#                   Begin of O R A C L E startup section                   #"
        echo "#############################################################################"
        echo
        ora_environment start

        #
        # Check if we really have all the Oracle components we are told to start
        #
        if [ ! -x $ORACLE_HOME/bin/emctl -a ${START_ORACLE_DB_EM:-no} = "yes" ]; then
            echo "${warn}Can't find needed file: emctl - Setting START_ORACLE_DB_EM = no $norm"
            START_ORACLE_DB_EM="cannot";
        fi

        if [ ! -x /etc/init.d/init.crs -a ${START_ORACLE_DB_CRS:-no} = "yes" ]; then
            echo "${warn}Can't find needed file: init.crs - Setting START_ORACLE_DB_CRS = no $norm"
            START_ORACLE_DB_CRS="cannot";
        fi


        if [ ! -x /sbin/load_ocfs -a ${START_ORACLE_DB_OCFS:-no} = "yes" ]; then
            echo "${warn}Can't find needed file: /sbin/load_ocfs - Setting START_ORACLE_DB_OCFS = no $norm"
            START_ORACLE_DB_OCFS="cannot";
        fi

        echo

        # Set kernel parameters for Oracle
        if [ "${SET_ORACLE_KERNEL_PARAMETERS:-no}" == "yes" ]; then
            echo
            echo "Setting kernel parameters for Oracle, see file"
            if test -f /etc/sysconfig/oracle; then
                echo "/etc/sysconfig/oracle for explanations."
            else
                echo "/etc/rc.config.d/oracle.rc.config for explanations."
            fi
            echo

            if [ ! -d /proc/sys/kernel ]; then
                echo; echo "No sysctl kernel interface - cannot set kernel parameters."; echo
                rc_failed
            else
                # Set shared memory parameters
                echo -n "${extd}Shared memory:$norm "
                test -f /proc/sys/kernel/shmmax && echo -n " SHMMAX=${SHMMAX:-3294967296}"
                test -f /proc/sys/kernel/shmmax && echo ${SHMMAX:-3294967296} > /proc/sys/kernel/shmmax
                test -f /proc/sys/kernel/shmmni && echo -n " SHMMNI=${SHMMNI:-4096}"
                test -f /proc/sys/kernel/shmmni && echo ${SHMMNI:-4096} > /proc/sys/kernel/shmmni
                test -f /proc/sys/kernel/shmall && echo " SHMALL=${SHMALL:-2097152}"
                test -f /proc/sys/kernel/shmall && echo ${SHMALL:-2097152} > /proc/sys/kernel/shmall
                test -f /proc/sys/kernel/shmall || echo

                # Set the semaphore parameters:
                # see Oracle release notes for Linux for how to set these values
                # SEMMSL, SEMMNS, SEMOPM, SEMMNI
                echo -n "${extd}Semaphore values:$norm "
                test -f /proc/sys/kernel/sem && echo -n " SEMMSL=${SEMMSL:-1250}"
                test -f /proc/sys/kernel/sem && echo -n " SEMMNS=${SEMMNS:-32000}"
                test -f /proc/sys/kernel/sem && echo -n " SEMOPM=${SEMOPM:-100}"
                test -f /proc/sys/kernel/sem && echo " SEMMNI=${SEMMNI:-256}"
                test -f /proc/sys/kernel/sem && echo ${SEMMSL:-1250} ${SEMMNS:-32000} ${SEMOPM:-100} ${SEMMNI:-128} > /proc/sys/kernel/sem
                test -f /proc/sys/kernel/sem || echo

                echo -n "${extd}Other values:$norm "
                test -f /proc/sys/fs/file-max && echo -n " FILE_MAX_KERNEL=${FILE_MAX_KERNEL:-131072}"
                test -f /proc/sys/fs/file-max && echo ${FILE_MAX_KERNEL:-131072} > /proc/sys/fs/file-max
                test -f /proc/sys/net/ipv4/ip_local_port_range && echo " IP_LOCAL_PORT_RANGE=${IP_LOCAL_PORT_RANGE:-"1024 65000"}"
                test -f /proc/sys/net/ipv4/ip_local_port_range && echo ${IP_LOCAL_PORT_RANGE:-"1024 65000"} > /proc/sys/net/ipv4/ip_local_port_range
                test -f /proc/sys/vm/vm_mapped_ratio && echo -n " VM_MAPPED_RATIO=${VM_MAPPED_RATIO:-250}"
                test -f /proc/sys/vm/vm_mapped_ratio && echo ${VM_MAPPED_RATIO:-250} > /proc/sys/vm/vm_mapped_ratio
                test -f /proc/sys/fs/aio-max-size && echo " AIO_MAX_SIZE=${AIO_MAX_SIZE:-131072}"
                test -f /proc/sys/fs/aio-max-size && echo ${AIO_MAX_SIZE:-131072} > /proc/sys/fs/aio-max-size
                test -f /proc/sys/fs/aio-max-size || echo

                echo -n "${extd}ULIMIT values:$norm "
                echo " MAX_CORE_FILE_SIZE_SHELL=${MAX_CORE_FILE_SIZE_SHELL:-0}"
                ulimit -c ${MAX_CORE_FILE_SIZE_SHELL:-0}
                echo -n " FILE_MAX_SHELL=${FILE_MAX_SHELL:-65536}"
                ulimit -n ${FILE_MAX_SHELL:-65536}
                echo " PROCESSES_MAX_SHELL=${PROCESSES_MAX_SHELL:-16384}"
                ulimit -u ${PROCESSES_MAX_SHELL:-16384}


                # Check if shmmax is really set to what we want - on some systems and
                # certain settings the result could be shmmax=0 if you set it to e.g. 4GB!
                if [ `cat /proc/sys/kernel/shmmax` != "${SHMMAX:-3294967296}" ]; then
                    echo "${warn}---- WARNING - SHMMAX could not be set properly ----$norm"
                    echo "     Tried to set it to: ${SHMMAX:-3294967296}"
                    echo "     Value is now: `cat /proc/sys/kernel/shmmax`"
                    echo "     You might try again with a lower value."
                fi
            fi
            echo
            echo -n "Kernel parameters set for Oracle: "
            rc_status -v
            echo
            echo
        fi

        rc_reset

        echo -n " - Starting Oracle Cluster Filesystem..."
        if [ "${START_ORACLE_DB_OCFS:-no}" = "yes" ]; then
            $LOAD_OCFS >& /dev/null
            rc_status -v -r
            echo -n " - Mounting Oracle Cluster Filesystem(s)..."
            $MOUNT -a -t ocfs >& /dev/null
            rc_status -v -r
        else
            if [ ${START_ORACLE_DB_OCFS:-no} = "cannot" ]; then
                rc_status -s
            else
                rc_status -u
            fi
        fi

        rc_reset

        echo -n " - Starting Oracle CRS..."
        if [ "${START_ORACLE_DB_CRS:-no}" = "yes" ]; then
            /etc/init.d/init.crs start >& /dev/null
            rc_status -v -r
            sleep 2
            kill -1 1
        else
            if [ ${START_ORACLE_DB_CRS:-no} = "cannot" ]; then
                rc_status -s
            else
                rc_status -u
            fi
        fi

        rc_reset

        echo

        echo -n " - Starting Oracle EM dbconsole..."
        if [ "${START_ORACLE_DB_EM:-no}" = "yes" ]; then
            su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $ORACLE_HOME/bin/emctl start dbconsole >& /dev/null &"
            rc_status -v -r
        else
            if [ ${START_ORACLE_DB_EM:-no} = "cannot" ]; then
                rc_status -s
            else
                rc_status -u
            fi
        fi

        rc_reset
        ;;

    stop|kill)
        echo
        echo "#############################################################################"
        echo "#                  Begin of O R A C L E shutdown section                   #"
        echo "#############################################################################"
        echo
        ora_environment stop

echo "Shutting down Oracle services (only those running)"; echo

        if test -x $ORACLE_HOME/Apache/Apache/bin/apachectl && $CHECKPROC $ORACLE_HOME/Apache/Apache/bin/httpd
        then
            echo -n "Shutting down Apache: "
            export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $ORACLE_HOME/Apache/Apache/bin/apachectl stop >& /dev/null; rc_status -v -r
        fi
        if test -x $AGENT_PROG && $CHECKPROC $ORACLE_HOME/bin/dbsnmp
        then
            echo -n "Shutting down Agent: "
            su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $AGENT_STOP >& /dev/null"; rc_status -v -r
        fi
        if [ -x $ORACLE_HOME/bin/emctl ]
        then
            echo -n
            echo -n "Shutting down EM console: "
            su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; emctl stop dbconsole " &
            sleep 20
            killall emagent
            killall emdctl
            rc_status -v -r
        fi
        test -x $ORACLE_HOME/bin/cmctl && echo -n && echo -n "Shutting down Connection Manager: " && \
            (su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $ORACLE_HOME/bin/cmctl stop >& /dev/null"; rc_status -v -r)
        test -x $ORACLE_HOME/bin/lsnrctl && $CHECKPROC $ORACLE_HOME/bin/tnslsnr && echo -n "Shutting down Listener: " && \
            (su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $ORACLE_HOME/bin/lsnrctl stop >& /dev/null & "; rc_status -v -r)
        test -x $ORACLE_HOME/bin/dbshut && $CHECKPROC $ORACLE_HOME/bin/oracle && echo -n "Shutting down Database: " && \
            (su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; $ORACLE_HOME/bin/dbshut >& /dev/null"; rc_status -v -r)
        test -x $ORACLE_HOME/bin/gsdctl && test "" != "`ps U oracle|grep 'DPROGRAM=gsd'`" && echo -n "Shutting down GSD: " && \
            (su - $ORACLE_OWNER -c "export ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN; unset JAVA_BINDIR; unset JAVA_HOME; $ORACLE_HOME/bin/gsdctl stop >& /dev/null"; rc_status -v -r)
        test -x $ORACLE_HOME/oracm/bin/oracm && $CHECKPROC $ORACLE_HOME/oracm/bin/oracm && echo -n "Shutting down OCM: " && \
            (killall oracm >& /dev/null; rm -f /etc/rac_on; rc_status -v -r)
        test -x /etc/init.d/init.crs && echo -n && echo -n "Turning off CRS:" && /etc/init.d/init.crs stop
        if [ "`$MOUNT -t ocfs | grep ocfs`" != "" ]; then
            echo -n "Unmounting all OCFS filesystems: " && ($UMOUNT -t ocfs -a; rc_status -v -r)
        fi
        if [ "$1" = "kill" ]
        then
            echo Killing all ORACLE processes
            kill `ps aux | grep '^oracle' | awk '{print $2}'`
            sleep 10
            kill -9 `ps aux | grep '^oracle' | awk '{print $2}'`
        fi

        ;;
    status)
        echo
        echo "#############################################################################"
        echo "#                   Begin of O R A C L E status section                    #"
        echo "#############################################################################"
        echo
        ora_environment status

echo "${extd}Kernel Parameters$norm" echo -n "Shared memory:" echo -n " SHMMAX=" `cat /proc/sys/kernel/shmmax` echo -n " SHMMNI=" `cat /proc/sys/kernel/shmmni` echo " SHMALL=" `cat /proc/sys/kernel/shmall` echo -n "Semaphore values:" echo " SEMMSL, SEMMNS, SEMOPM, SEMMNI: " `cat /proc/sys/kernel/sem` echo

if [ -x $ORACLE_HOME/bin/oracle ]; then echo "${extd}Database-Instances$norm" # loop over the instances (very simple !!!) IFS=: grep -v '^\(#\|$\)' /etc/oratab | while read sid ohome autostart ; do state=up su - $ORACLE_OWNER -c "export ORACLE_SID=$sid; sqlplus /nolog" <<-! 2>/dev/null | grep ORA-01034 >/dev/null && state=downconnect / as sysdbashow sga! echo "Instance $sid is $state \(autostart: $autostart\)" done echo fi

if [ -x $ORACLE_HOME/bin/lsnrctl ]; then state=up su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl status" | grep "[nN]o [lL]istener" >/dev/null && state=down echo "${extd}TNS-Listener:$norm $state" echo fi

numhttpd=`ps -e | grep httpd | wc -l | sed 's/ //g'` state=up if [ "$numhttpd" -lt 1 ] ; then state=down ; fi echo "${extd}Web-Server (Apache httpd):$norm $state ($numhttpd processes)" echo

if [ -x $ORACLE_HOME/bin/agentctl ]; then state=up su - $ORACLE_OWNER -c "agentctl status" | grep "Could not contact agent" >/dev/null && state=down echo "${extd}Intelligent Agent:$norm $state" echo fi

echo "${extd}Process list for user oracle:$norm" ps U oracle ps ax | grep oracm | grep -v grep ps axc | grep crs | grep -v grep ps axc | grep css | grep -v grep tail -10 $ORACLE_HOME/../crs_1/crs/log/* ;; restart) ## Stop the service and regardless of whether it was ## running or not, start it again. $0 stop $0 start ;; *) echo "Usage: $0 {start|stop|status|restart}" exit 1esac

echo
echo "#############################################################################"
echo "#                        End of O R A C L E section                        #"
echo "#############################################################################"
echo

# Global return value of this script is "success", always. We have too many
# individual return values...
rc_status -r

rc_exit

APPENDIX 2. List of packages.
Here is the list of rpms required for Oracle 10g for Linux x86-64.

Magnie Fabrizio provided this list. It is not an official Oracle list.

glibc-devel-2.3.3-98.28
glibc-2.3.3-98.28
glibc-32bit-9-200407011233
glibc-devel-32bit-9-200407011229
compat-2004.7.1-1.2
compat-32bit-9-200407011229
XFree86-libs-4.3.99.902-43.22
XFree86-libs-32bit-9-200407011229
libaio-devel-0.3.98-18.4
libaio-32bit-9-200407011229
libaio-0.3.98-18.4
libaio-devel-32bit-9-200407011229
openmotif-libs-2.2.2-519.1
openmotif-2.2.2-519.1

Openmotif is needed only for 10g. The versions are the SLES9 defaults without SP1.

This document is made available as a courtesy under a partner agreement with Novell.