
10g RAC on Linux x86_64 Installation

Oracle 10.2.0.2, 2006 Q3

Confidential - For Internal Use Only


Table of Contents

1. Pre-Installation
2. Oracle Clusterware (formerly CRS) and ASM
   - Pre-Install of Clusterware Files (OCR and Voting Disk)
   - Pre-Install of Database Files for ASM (Automatic Storage Management)
   - Configure the Disk Devices to Use the ASM Library Driver
   - Install Oracle Clusterware
   - Post-Installation Administration Info
3. Oracle Database 10g with RAC – Software (Binaries)
   - Pre-Install Notes
   - Install
4. Patch Oracle Database Software
   - Download and Install Patches
   - Fix an Install Bug (5117016)
   - Fix a Permission Bug (Patch 5087548)
5. RAC Database Using the DBCA with ASM
   - Pre-Install
   - Install
6. Post-Installation Tasks
7. Oracle Files
   - Local Files
   - Shared Oracle Database Files
   - Shared Database Files for the Application


1. Pre-Installation

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/prelinux.htm#sthref133

1. Required Software
 - Red Hat Enterprise Linux 3 AS Update 3 (kernel 2.4.21-20) x86_64
   o uname -r
   o cat /etc/redhat-release
   o cat /etc/issue
 - Oracle Database Enterprise Edition 10.2 x86_64
 - Oracle Clusterware 10.2 x86_64
 - Oracle Patch 10.2.0.2 (p4547817_10202_Linux-x86-64)
 - Oracle Permissions Patch (p5087548_10202_Linux-x86-64)
 - Oracle Critical Patch Update Jul 2006 (p5225799_10202_Linux-x86-64)
 - Oracle ASMLib 2.0
 - Oracle Cluster Verification Utility 1.0
 - Oracle Client 10.2.x

2. Minimum hardware requirements for each RAC node
 - 1 GB of physical RAM
   o cat /proc/meminfo | grep MemTotal
 - 1.5 GB of swap space (or the same size as RAM)
   o cat /proc/meminfo | grep SwapTotal
 - 400 MB of disk space in the /tmp directory
   o df -h /tmp
 - Up to 4 GB of disk space for the Oracle software
 - Optional: 1.2 GB of disk space for a preconfigured database that uses file system storage
 - Shared Database Disk: 2 TB usable, 33 GB LUNs, RAID 1+0
   o /sbin/fdisk -l

3. Networking hardware requirements

 - Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the RAC interconnect).
 - The interface names associated with the network adapters for each network must be the same on all nodes.
 - For increased reliability, you can configure redundant public and private network adapters for each node.
 - For the public network, each network adapter must support TCP/IP.
 - For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended).


UDP is the default interconnect protocol for RAC and TCP is the interconnect protocol for Oracle CRS.

4. IP Address requirements for each RAC node
 - An IP address and an associated host name registered in the domain name service (DNS) for each public network interface. If you do not have an available DNS, then record the network name and IP address in the system hosts file, /etc/hosts.
 - One unused virtual IP address and an associated virtual host name registered in DNS that you will configure for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, you can configure clients to use the virtual host name or IP address. If a node fails, its virtual IP address fails over to another node.
 - A private IP address and optional host name for each private interface. Oracle recommends that you use non-routable IP addresses for the private interfaces, for example: 10.*.*.* or 192.168.*.*. You can use the /etc/hosts file on each node to associate private host names with private IP addresses.
   o cat /etc/hosts
   o /sbin/ifconfig -a

Example:

 Node  Interface Name  Type     IP Address     Registered In
 rac1  rac1            Public   143.46.43.100  DNS (if available, else the hosts file)
 rac1  rac1-vip        Virtual  143.46.43.104  DNS (if available, else the hosts file)
 rac1  rac1-priv       Private  10.0.0.1       Hosts file
 rac2  rac2            Public   143.46.43.101  DNS (if available, else the hosts file)
 rac2  rac2-vip        Virtual  143.46.43.105  DNS (if available, else the hosts file)
 rac2  rac2-priv       Private  10.0.0.2       Hosts file
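The table above corresponds to an /etc/hosts file like the following on each node (addresses and names are the sample values from the table; public and VIP entries are duplicated here only when no DNS is available):

```
127.0.0.1      localhost.localdomain localhost

# Public
143.46.43.100  rac1
143.46.43.101  rac2

# Virtual (VIP)
143.46.43.104  rac1-vip
143.46.43.105  rac2-vip

# Private (interconnect)
10.0.0.1       rac1-priv
10.0.0.2       rac2-priv
```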

5. Linux x86 (64-bit) software requirements
 To see installed packages:
   o rpm -qa
   o rpm -q kernel --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n"
   o rpm -q <package_name>

Operating system (x86 64-bit):
 - Red Hat Enterprise Linux AS/ES 3 (Update 4 or later)
 - Red Hat Enterprise Linux AS/ES 4 (Update 1 or later)
 - SUSE Linux Enterprise Server 9 (Service Pack 2 or later)

Kernel version (x86 64-bit): the system must be running one of the following kernel versions (or a later version):
 - Red Hat Enterprise Linux 3 (Update 4): 2.4.21-27.EL (Note: this is the default kernel version)
 - Red Hat Enterprise Linux 4 (Update 1): 2.6.9-11.EL
 - SUSE Linux Enterprise Server 9 (Service Pack 2): 2.6.5-7.201

Red Hat Enterprise Linux 3 (Update 4) packages: the following packages (or later versions) must be installed:
 make-3.79.1-17
 compat-db-4.0.14-5.1
 control-center-2.2.0.1-13
 gcc-3.2.3-47
 gcc-c++-3.2.3-47
 gdb-6.1post-1.20040607.52
 glibc-2.3.2-95.30
 glibc-common-2.3.2-95.30
 glibc-devel-2.3.2-95.30
 glibc-devel-2.3.2-95.20 (32 bit)
 glibc-devel-2.3.4-2.13.i386 (32-bit)
 compat-db-4.0.14-5
 compat-gcc-7.3-2.96.128
 compat-gcc-c++-7.3-2.96.128
 compat-libstdc++-7.3-2.96.128
 compat-libstdc++-devel-7.3-2.96.128
 gnome-libs-1.4.1.2.90-34.2 (32 bit)
 libstdc++-3.2.3-47
 libstdc++-devel-3.2.3-47
 openmotif-2.2.3-3.RHEL3


 sysstat-5.0.5-5.rhel3
 setarch-1.3-1
 libaio-0.3.96-3
 libaio-devel-0.3.96-3

Note: XDK is not supported with gcc on Red Hat Enterprise Linux 3.

Red Hat Enterprise Linux 4 (Update 1) packages:

The following packages (or later versions) must be installed:

 binutils-2.15.92.0.2-10.EL4
 binutils-2.15.92.0.2-13.0.0.0.2.x86_64
 compat-db-4.1.25-9
 control-center-2.8.0-12
 gcc-3.4.3-9.EL4
 gcc-c++-3.4.3-9.EL4
 glibc-2.3.4-2
 glibc-common-2.3.4-2
 gnome-libs-1.4.1.2.90-44.1
 libstdc++-3.4.3-9.EL4
 libstdc++-devel-3.4.3-9.EL4
 make-3.80-5

Note: XDK is not supported with gcc on Red Hat Enterprise Linux 4.

SUSE Linux Enterprise Server 9 Packages

The following packages (or later versions) must be installed:

 binutils-2.15.90.0.1.1-32.5
 gcc-3.3.3-43.24
 gcc-c++-3.3.3-43.24
 glibc-2.3.3-98.28
 gnome-libs-1.4.1.7-671.1
 libstdc++-3.3.3-43.24
 libstdc++-devel-3.3.3-43.24
 make-3.80-184.1

PL/SQL native compilation, Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit (XDK)

Intel C++ Compiler 8.1 or later and the version of GNU C and C++ compilers listed previously for the distribution are supported for use with these products.

Note: Intel C++ Compiler v8.1 or later is supported. However, it is not required for installation.

On Red Hat Enterprise Linux 3, Oracle C++ Call Interface (OCCI) is supported with version 3.2.3 of the GNU C++ compiler. This is the default compiler version. OCCI is also supported with Intel Compiler v8.1 with gcc 3.2.3 standard template libraries.

On Red Hat Enterprise Linux 4.0, OCCI does not support GCC 3.4.3. To use OCCI on Red Hat Enterprise Linux 4.0, you need to install GCC 3.2.3.

Oracle XML Developer's Kit is not supported with GCC on Red Hat Linux 4.0. It is supported only with Intel C++ Compiler (ICC).

Oracle JDBC/OCI Drivers

You can use the following optional JDK versions with the Oracle JDBC/OCI drivers; however, they are not required for the installation:

Sun JDK 1.5.0 (64-bit) Sun JDK 1.5.0 (32-bit)

Sun JDK 1.4.2_09 (32-bit)

Oracle Real Application Clusters

For a cluster file system, use one of the following options:

Red Hat 3: Oracle Cluster File System (OCFS)

Version 1.0.13-1 or later

OCFS requires the following kernel packages:

 ocfs-support
 ocfs-tools
 ocfs-kernel_version

In the preceding list, the variable kernel_version represents the kernel version of the operating system on which you are installing OCFS.

Note: OCFS is required only if you want to use a cluster file system for database file storage. If you want to use Automatic Storage Management or raw devices for database file storage, then you do not need to install OCFS.


Obtain OCFS kernel packages, installation instructions, and additional information about OCFS from the following URL:

http://oss.oracle.com/projects/ocfs/

Red Hat 4: Oracle Cluster File System 2 (OCFS2)

Version 1.0.1-1 or later

For information about Oracle Cluster File System version 2, refer to the following Web site:

http://oss.oracle.com/projects/ocfs2/

For OCFS2 certification status, refer to the Certify page on OracleMetaLink.

SUSE 9: Oracle Cluster File System 2 (OCFS2)

OCFS2 is bundled with SuSE Linux Enterprise Server 9, Service Pack 2 or higher.

If you are running SUSE 9, then ensure that you have upgraded to the latest kernel (Service Pack 2 or higher), and ensure that you have installed the ocfs2-tools and ocfs2console packages.

For OCFS2 certification status, refer to the Certify page on OracleMetaLink.

6. Additional RAC-specific software requirements
 See ASMLib downloads at:
 http://www.oracle.com/technology/software/tech/linux/asmlib/rhel3.html

Real Application Clusters ASMLIB 2.0 for Red Hat 3.0 AS
 - Library and tools:
   o oracleasm-support-2.0.3-1.x86_64.rpm
   o oracleasmlib-2.0.2-1.x86_64.rpm
 - Driver for kernel 2.4.21-40.EL:
   o oracleasm-2.4.21-40.ELsmp-1.0.4-1.x86_64.rpm

7. Create the Linux groups and users on each RAC node
   o dba group (/usr/sbin/groupadd dba)
   o oinstall group (/usr/sbin/groupadd oinstall)
   o oracle user (/usr/sbin/useradd -G dba oracle)
   o nobody user

 The Oracle software owner user and the Oracle Inventory, OSDBA, and OSOPER groups must exist and be identical on all cluster nodes. To create these identical users and groups, identify the user ID and group IDs assigned to them on the node where you created them, then create the user and groups with the same names and IDs on the other cluster nodes.
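The ID-matching step above can be scripted. A minimal sketch: run this on the node where the user and groups already exist, then execute its output as root on each remaining node (the function and its name are illustrative, not part of the Oracle tooling):

```shell
# clone_ids: print the groupadd/useradd commands needed to recreate a
# user and its groups with the same numeric IDs on another node.
# Usage: clone_ids <user> <group> [<group> ...]
clone_ids() {
  user=$1; shift
  for grp in "$@"; do
    # third field of the group entry is the numeric GID
    gid=$(getent group "$grp" | cut -d: -f3)
    echo "/usr/sbin/groupadd -g $gid $grp"
  done
  echo "/usr/sbin/useradd -u $(id -u "$user") -G $(echo "$@" | tr ' ' ',') $user"
}

# Example with this guide's names:
# clone_ids oracle oinstall dba
```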

8. Configure SSH on each RAC node
 - Log in as oracle.
 - Create the .ssh directory in oracle's home directory (then chmod 700 .ssh).
 - Generate an RSA key for version 2 of the SSH protocol:
   /usr/bin/ssh-keygen -t rsa
 - Generate a DSA key for version 2 of the SSH protocol:
   /usr/bin/ssh-keygen -t dsa
 - Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the ~/.ssh/authorized_keys file for all nodes, and share ~/.ssh/authorized_keys to all cluster nodes:
   chmod 644 ~/.ssh/authorized_keys
 - Enable the Installer to use the ssh and scp commands without being prompted for a pass phrase:
   o exec /usr/bin/ssh-agent $SHELL
   o /usr/bin/ssh-add
   o At the prompts, enter the pass phrase for each key that you generated.
   o Test ssh connections and confirm the authenticity message. Also test ssh and confirm the authenticity message back to the node you are working on. Example: if you are on node1, ssh to node1.
 - Ensure that X11 forwarding will not cause the installation to fail:
   o Edit or create ~oracle/.ssh/config as follows:
     Host *
       ForwardX11 no
 - If necessary, start required X emulation software on the client.
   Test: /usr/X11R6/bin/xclock
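The key-generation half of the steps above can be sketched as follows. This is a minimal non-interactive sketch that uses an empty pass phrase for illustration only; the guide's interactive flow with pass-phrase-protected keys and ssh-agent is the recommended one:

```shell
#!/bin/sh
# Generate RSA and DSA keys for the current user and append the public
# keys to authorized_keys. Run as oracle on each node, then merge every
# node's authorized_keys into one file shared across the cluster.
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
for type in rsa dsa; do
  # only generate a key if one does not already exist
  [ -f "$SSH_DIR/id_$type" ] || \
    ssh-keygen -t $type -N "" -f "$SSH_DIR/id_$type" >/dev/null
  cat "$SSH_DIR/id_$type.pub" >> "$SSH_DIR/authorized_keys"
done
chmod 644 "$SSH_DIR/authorized_keys"
```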

9. Configure kernel parameters on each RAC node
 Values should be equal to or greater than those in the following table on all nodes (/etc/sysctl.conf):
   o /sbin/sysctl -a | grep sem
   o /sbin/sysctl -a | grep shm
   o /sbin/sysctl -a | grep file-max
   o /sbin/sysctl -a | grep ip_local_port_range
   o /sbin/sysctl -a | grep net.core

 Parameter                       Value                                        File
 semmsl, semmns, semopm, semmni  250 32000 100 128                            /proc/sys/kernel/sem
 shmmax                          Half the size of physical memory (in bytes)  /proc/sys/kernel/shmmax
 shmmni                          4096                                         /proc/sys/kernel/shmmni
 shmall                          2097152                                      /proc/sys/kernel/shmall
 file-max                        65536                                        /proc/sys/fs/file-max
 ip_local_port_range             1024 (minimum) 65000 (maximum)               /proc/sys/net/ipv4/ip_local_port_range
 rmem_default                    262144                                       /proc/sys/net/core/rmem_default
 rmem_max                        262144                                       /proc/sys/net/core/rmem_max
 wmem_default                    262144                                       /proc/sys/net/core/wmem_default
 wmem_max                        262144                                       /proc/sys/net/core/wmem_max

To change values:
   o Edit /etc/sysctl.conf with the values below.
   o Once edited, execute /sbin/sysctl -p to apply the changes manually.

 kernel.shmall = 2097152
 kernel.shmmax = 2147483648
 kernel.shmmni = 4096
 kernel.sem = 250 32000 100 128
 fs.file-max = 65536
 net.ipv4.ip_local_port_range = 1024 65000
 net.core.rmem_default = 1048576
 net.core.rmem_max = 1048576
 net.core.wmem_default = 262144
 net.core.wmem_max = 262144
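A quick way to verify the settings on each node is to compare the live /proc values against the required minimums. A sketch (the helper function is illustrative; it reads /proc directly so it runs unprivileged, and only covers the single-value parameters):

```shell
#!/bin/sh
# check <name> <proc-path> <minimum>: report OK/LOW for one parameter
check() {
  cur=$(cut -f1 "$2" 2>/dev/null)
  [ -n "$cur" ] || { echo "$1: MISSING"; return; }
  [ "$cur" -ge "$3" ] && echo "$1: OK ($cur)" || echo "$1: LOW ($cur < $3)"
}

check shmmni   /proc/sys/kernel/shmmni 4096
check shmall   /proc/sys/kernel/shmall 2097152
check file-max /proc/sys/fs/file-max   65536
```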

10. Set shell limits for the oracle user on all nodes to improve performance
 - Add the following lines to the /etc/security/limits.conf file:

   oracle soft nproc 2047
   oracle hard nproc 16384
   oracle soft nofile 1024
   oracle hard nofile 65536

 - Add or edit the following line in the /etc/pam.d/login file:

   session required /lib/security/pam_limits.so

 - Edit /etc/profile with the following:


   if [ $USER = "oracle" ]; then
     if [ $SHELL = "/bin/ksh" ]; then
       ulimit -p 16384
       ulimit -n 65536
     else
       ulimit -u 16384 -n 65536
     fi
   fi

11. Create Oracle software directories on each node
 - Oracle Base, ex. /u01/app/oracle
   o Minimum 3 GB available disk space
   # mkdir -p /u01/app/oracle
   # chown -R oracle:oinstall /u01/app/oracle
   # chmod -R 775 /u01/app/oracle
 - Oracle Cluster Ready Services, ex. /u01/crs/oracle/product/10/crs
   o Should not be a subdirectory of the Oracle Base directory
   o Minimum 1 GB available disk space
   # mkdir -p /u01/crs/oracle/product/10/crs
   # chown -R oracle:oinstall /u01/crs
   # chmod -R 775 /u01/crs
 - Note: the Oracle Home directory will be created by the OUI.
   o Oracle Home directories will be listed in /etc/oratab

12. Oracle database files and Oracle database recovery files (if utilized) must reside on shared storage:
 - ASM: Automatic Storage Management
 - NFS file system (requires a NAS device)
 - Shared raw partitions

13. The Oracle Cluster Registry and Voting disk files must reside on shared storage, but not on ASM. You cannot use Automatic Storage Management to store OCR or Voting disk files because these files must be accessible before any Oracle instance starts.
 - These files MUST be raw files on shared storage. Files:
   o ora_ocr: 100M each, 2 for redundancy
   o ora_vote: 20M each, 3 for redundancy
 - Clustered sharing of these raw files is handled by CRS. Third-party clusterware is not required.

14. Configure the oracle user's environment
 - PATH
   o In PATH: $ORACLE_HOME/bin before /usr/X11R6/bin
 - ORACLE_BASE (ex. /u01/app/oracle)
 - ORACLE_HOME (ex. $ORACLE_BASE/product/<version>)
 - ORA_CRS_HOME (ex. $ORACLE_BASE/crs)
 - DISPLAY
 - umask 022
 - Test X emulator
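The environment settings above translate into ~oracle/.bash_profile entries along these lines (a sketch; the version directory and CRS home path are this guide's examples, so adjust them to your layout):

```shell
# Oracle environment for the oracle user (paths are examples)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export ORA_CRS_HOME=/u01/crs/oracle/product/10/crs
# $ORACLE_HOME/bin must come before /usr/X11R6/bin in PATH
export PATH=$ORACLE_HOME/bin:/usr/X11R6/bin:$PATH
umask 022
```

Set DISPLAY separately per login session, since it depends on where your X emulator runs.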


15. Ensure a switch resides on the network between the nodes.

16. For Oracle Clusterware (CRS) on x86 64-bit, you must run rootpre.sh (loaded with the Clusterware software).

17. Oracle Database 10g installation requires you to perform a two-phase process in which you run Oracle Universal Installer (OUI) twice. The first phase installs Oracle Clusterware 10g Release 2 (10.2) and the second phase installs the Oracle Database 10g software with RAC. These steps are documented below.


2. Oracle Clusterware (formerly CRS) and ASM

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#sthref666

1. Verify user equivalence by testing ssh to all nodes.
 - ssh may need to be in /usr/local/bin/.
   o Softlinks may need to be created for ssh and scp in /usr/local/bin/.

2. IP Addresses: In addition to the host machine's public internet protocol (IP) address, obtain two more IP addresses for each node.
 - Both nodes require a separate public IP address for the node's Virtual IP address (VIP). Oracle uses VIPs for client-to-database connections. Therefore, the VIP address must be publicly accessible.

3. The third address for each node must be a private IP address for inter-node, or instance-to-instance, Cache Fusion traffic. Using public interfaces for Cache Fusion can cause performance problems.

4. Oracle Clusterware should be installed in a separate home directory. You should not install Oracle Clusterware in a release-specific Oracle home mount point.

Pre-Install of Clusterware files (OCR and Voting Disk)
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#BABCEDJB

5. Oracle Clusterware files to be installed are:
 - Oracle Cluster Registry (OCR): 100M: ora_ocr
 - CRS Voting Disk: 20M: ora_vote

6. The CRS files listed above must be on shared storage (OCFS, NFS, or raw) and bound and visible to all nodes. You cannot use Automatic Storage Management to store Oracle CRS files, because these files must be accessible before any Oracle instance starts.

7. If using raw, do the following on all nodes as root:
 - To identify device names: /sbin/fdisk -l
   o devicename examples: /dev/sdv OR /dev/emcpowera
 - Create (raw) partitions: /sbin/fdisk <devicename>
   o Use the "p" command to list the partition table of the device.
   o Use the "n" command to create a partition.
   o After creating the required partitions on this device, use the "w" command to write the modified partition table to the device.
 - Bind partitions to the raw devices:
   o See what devices are already bound: /usr/bin/raw -qa
   o Add a line to /etc/sysconfig/rawdevices for each partition created:
     /dev/raw/raw1 </path/partition_name>
   o For the raw device created:
     chown root:dba /dev/raw/raw1
     chmod 640 /dev/raw/raw1
   o To bind the partitions to the raw devices, enter the following command:
     /sbin/service rawdevices restart
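Putting the binding steps together, /etc/sysconfig/rawdevices might look like the following. The block device paths here are illustrative; substitute the partitions you actually created for the OCR and voting disk:

```
# /etc/sysconfig/rawdevices
# raw device    block partition
/dev/raw/raw1   /dev/sdv1    # ora_ocr  (100M)
/dev/raw/raw2   /dev/sdv2    # ora_vote (20M)
```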


Pre-Install of Database files for ASM (Automatic Storage Management)
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#sthref838

8. Determine how many devices and how much free disk space are required:
 - Determine space needed for database files
 - Determine space needed for recovery files (optional)

9. ASM redundancy level: determines how ASM mirrors, the number of disks needed for mirroring, and the amount of disk space needed.
 - External Redundancy: ASM does not mirror.
 - Normal Redundancy: Two-way ASM mirroring. A minimum of 2 disks is required. Usable disk space is 1/2 the sum of the disk space.
 - High Redundancy: Three-way ASM mirroring. A minimum of 3 disks is required. Usable disk space is 1/3 the sum of the disk space.

10. ASM metadata requires additional disk space. Use the following calculation to determine the space in megabytes:

    15 + (2 * number_of_disks) + (126 * number_of_ASM_instances)
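As a worked example of the formula: with 60 disks and one ASM instance per node on a two-node cluster (both counts are illustrative values, not this guide's exact layout), the metadata overhead comes to:

```shell
# ASM metadata overhead in MB: 15 + 2*disks + 126*asm_instances
disks=60
asm_instances=2
echo $(( 15 + 2*disks + 126*asm_instances ))   # prints 387
```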

11. Failure groups for ASM disk group devices: optional. Associate a set of disk devices in a custom failure group.
 - Only available at the Normal or High redundancy level

12. Guidelines for disk devices and disk groups

All devices in an ASM disk group should be the same size and have the same performance characteristics

Do not specify more than one partition on a single physical disk as a disk group device. ASM expects each disk group device to be on a separate physical disk.

Although you can specify a logical volume as a device in an ASM disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing ASM from optimizing I/O across the physical devices.

13. If necessary, download the required ASMLIB packages from the OTN Web site: http://www.oracle.com/technology/software/tech/linux/asmlib/rhel3.html

14. Install the following three packages on all nodes, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
 - oracleasm-support-version.arch.rpm
 - oracleasm-kernel-version.arch.rpm
 - oracleasmlib-version.arch.rpm

15. On all nodes, install the packages as root:
    rpm -Uvh oracleasm-support-version.arch.rpm \
        oracleasm-kernel-version.arch.rpm \
        oracleasmlib-version.arch.rpm
 - Check kernel modules: /sbin/modprobe -v oracleasm

16. Run the oracleasm initialization script as root on all nodes:
    /etc/init.d/oracleasm configure
 - When requested, select owner (oracle), group (dba), and start on boot (y)


Configure the Disk Devices to Use the ASM Library Driver

17. Install or configure the shared disk devices that you intend to use for the disk group(s) and restart the system.

18. Identify the device name for the disks: /sbin/fdisk -l

19. Use either fdisk (or parted) to create a single whole-disk partition on the disk devices that you want to use.
 - On Linux systems, Oracle recommends that you create a single whole-disk partition on each disk.
 - To identify device names: /sbin/fdisk -l
   o devicename examples: /dev/sdv OR /dev/emcpowera
 - Create (raw) partitions: /sbin/fdisk <devicename>
   o Use the "p" command to list the partition table of the device.
   o Use the "n" command to create a partition.
   o After creating the required partitions on this device, use the "w" command to write the modified partition table to the device.

20. Mark the disk(s) as ASM disk(s). As root run:
    /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
    /etc/init.d/oracleasm createdisk DISK2 /dev/sda1
 - Where DISK1 and DISK2 are the names you want to assign to the disks. They MUST start with an uppercase letter.

21. On each node, to make the disks available on the other cluster nodes, enter the following command as root:
    /etc/init.d/oracleasm scandisks

22. On each node, confirm the disks:
    /etc/init.d/oracleasm listdisks

23. If you are using EMC PowerPath on Red Hat 3, add the following line to /etc/sysconfig/oracleasm:
    ORACLEASM_SCANEXCLUDE="emcpower"

Install Oracle Clusterware

24. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause installation errors if they contain stty commands. You must modify these files to suppress all output on STDERR, as in the following examples:

 Bourne, Bash, or Korn shell:
   if [ -t 0 ]; then
     stty intr ^C
   fi

 C shell:
   test -t 0
   if ($status == 0) then
     stty intr ^C
   endif

25. As root, run rootpre.sh, which is located in the ../clusterware/rootpre directory on the Oracle Database 10g Release 2 (10.2) installation media.


26. Using an X Windows emulator, start the runInstaller command from the clusterware directory on the Oracle Database 10g Release 2 (10.2) installation media:
    /mountpoint/clusterware/runInstaller
 - When OUI displays the Welcome page, click Next.
 - On the "Specify Home Details" page, remember, the Clusterware home CANNOT be the same as the ORACLE_HOME.

27. When the OUI is complete, run orainstRoot.sh and root.sh on all the nodes when requested.

28. Without user intervention, OUI runs:
 - Oracle Notification Server Configuration Assistant
 - Oracle Private Interconnect Configuration Assistant
 - Cluster Verification Utility (CVU)

29. If the CVU fails because of a missing VIP, this could be because all of the IP addresses are incorrectly considered private by Oracle (because they begin with 172.16.x.x - 172.31.x.x, 192.168.x.x, or 10.x.x.x). In a separate window as root, run the vipca manually:
 - DO NOT exit the OUI.
 - As root, launch VIPCA (ex: /apps/crs/oracle/product/10.2/crs/bin/vipca).
 - Enter the VIP node names and IP address for every node. Exit VIPCA.
 - Back in the OUI, retry the Cluster Verification Utility.

Post-Installation Administration Info

30. init.crs: should have been added to the server boot scripts to stop/start CRS.

31. The following are the CRS (CSS) background processes that must be running for CRS to function. These are stopped and started with init.crs:
 - evmd: Event manager daemon that starts the racgevt process to manage callouts.
 - ocssd: Manages cluster node membership and runs as the oracle user; failure of this process results in a cluster restart.
 - crsd: Performs high availability recovery and management operations such as maintaining the OCR. Also manages application resources; runs as the root user and restarts automatically upon failure.

32. To administer the ASM library driver and disks, use the oracleasm initialization script (used in the previous steps) with different options, as follows:
 - /etc/init.d/oracleasm configure
   o To reconfigure the ASM library driver
 - /etc/init.d/oracleasm enable OR disable
   o Change the behavior of the ASM library driver when the system boots. The enable option causes the ASM library driver to load when the system boots.
 - /etc/init.d/oracleasm restart OR stop OR start
   o Load or unload the ASM library driver without restarting the system
 - /etc/init.d/oracleasm createdisk DISKNAME devicename
   o Mark a disk device for use with the ASM library driver and give it a name
 - /etc/init.d/oracleasm deletedisk DISKNAME
   o To unmark a named disk device. You must drop the disk from the ASM disk group before you unmark it.
 - /etc/init.d/oracleasm querydisk {DISKNAME | devicename}
   o To determine whether a disk device or disk name is being used by the ASM library driver
 - /etc/init.d/oracleasm listdisks
   o To list the disk names of marked ASM library driver disks
 - /etc/init.d/oracleasm scandisks
   o To enable cluster nodes to identify which shared disks have been marked as ASM library driver disks on another node


3. Oracle Database 10g with RAC – Software (binaries)

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/racinstl.htm#sthref1048

Pre-Install Notes

1. The Oracle home that you create for installing Oracle Database 10g with the RAC software cannot be the same Oracle home that you used during the CRS installation.

2. During the installation, unless you are placing your Oracle home on a clustered file system, the OUI copies software to the local node and then copies the software to the remote nodes. On UNIX-based systems, the OUI then prompts you to run the root.sh script on all the selected nodes.

Install

3. Using an X Windows emulator, start the runInstaller command from the database directory on the Oracle Database 10g Release 2 (10.2) installation media:
    /mountpoint/database/runInstaller
 - Execute a normal Oracle install except where noted below.

4. Ensure the OUI is cluster aware.
 - After the Specify Home Details page you should see the Specify Hardware Cluster Installation Mode page.
 - If you do not, the OUI is not cluster aware and will not install the components required to run RAC.
 - View the OUI log in <oraInventory>/logs/ for install details.

5. On the Select Configuration Option page, select "Install Database Software only".

6. Complete the install.


4. Patch Oracle Database Software

Source: Oracle MetaLink - http://metalink.oracle.com

At this time, 10.2.0.2 is the latest GA version for Linux x86_64. The Oracle CD pack used for this install is 10.2.0.1.

Download and Install Patches

Refer to the OracleMetaLink Web site for required patches for your installation and to download required patches:

1. Use a Web browser to view the OracleMetaLink Web site: http://metalink.oracle.com
2. Log in to OracleMetaLink.
3. On the main OracleMetaLink page, click the Patches & Updates tab.
4. Click the Simple Search link, then the Advanced Search button.
5. On the Advanced Search page, click the search icon next to the Product or Product Family field.
6. In the Search and Select: Product Family field, enter RDBMS Server in the For field and click Go.
7. Select RDBMS Server under the Results heading and click Select. RDBMS Server appears in the Product or Product Family field and the current release appears in the Release field.
8. Select your platform from the list in the Platform field and click Go.
9. Any available patches appear under the Results heading.
10. Click the number of the patch that you want to download.
11. On the Patch Set page, click View README and read the page that appears. The README page contains information about the patch set and how to apply the patches to your installation.
12. Return to the Patch Set page, click Download, and save the file on your system.
13. Use the unzip utility provided with Oracle Database 10g to uncompress the Oracle patches that you downloaded from OracleMetaLink. The unzip utility is located in the $ORACLE_HOME/bin directory.

Fix an install bug (5117016)

14. cd /apps/oracle/10.2/rdbms/lib/   -- we need to make a copy
15. cp libserver10.a libserver10.a.base_cpOHRDBMSLIB
16. cd /apps/oracle/10.2/lib
17. mv libserver10.a libserver10.a.base_cpOHLIB
18. mv /apps/oracle/10.2/rdbms/lib/libserver10.a .
19. ls -al $ORACLE_HOME/bin/oracle*
20. relink oracle
21. ls -al $ORACLE_HOME/bin/oracle*
22. If oracle does not relink, stop and contact support.


Fix a Permission bug (patch 5087548)

23. Transfer the 10.2 patch file to the server.
24. Unzip the patch file: unzip <filename>
25. Set the 10g oracle environment.
26. cd 5087548/
27. Run OPatch: /apps/oracle/10.2/OPatch/opatch apply
28. cd $ORACLE_HOME/install
29. . ./changePerm.sh
30. Hit 'y' and enter.

 -- permission changes should take around 10-15 minutes
 -- *** if the script hangs then exit the window/job
 -- verify permission changes in /apps/oracle/10.2/bin/
 -- most permissions (sqlplus) should show: -rwxr-xr-x


5. RAC Database using the DBCA with ASM

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/dbcacrea.htm#sthref1091

Pre-Install

1. Database creation requirements of the ASM Library Driver:
 - You must use Database Configuration Assistant (DBCA) in interactive mode to create the database. You can run DBCA in interactive mode by choosing the Custom installation type or the Advanced database configuration option.
 - You must also change the default disk discovery string to ORCL:*.

Install

2. Run the CVU to verify that your system is prepared to create an Oracle Database with RAC:
    /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n node_list -d oracle_home [-verbose]
 - Example: /dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n node1,node2 -d /oracle/product/10.2.0/

3. Start the DBCA using an X Windows emulator: $ORACLE_HOME/bin/dbca
 - Execute a normal Oracle install except where noted below.

4. Ensure the DBCA is cluster aware.
 - The first page should be the Welcome page for RAC. If not, the DBCA is not cluster aware.
 - To diagnose:
   o Run the CVU: /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -post crsinst -n nodename
   o Run olsnodes

5. When asked "Select the operation that you want to perform", choose Create a Database.

6. Select the Custom Database template to manually define datafiles and options.

7. If you choose to manage the RAC database with Enterprise Manager, you can also choose one of the following:
 - Grid Control
 - Database Control

8. On the Storage Options page, the Cluster File System option is the default. Change to ASM.

9. For ASM, you will need to create an ASM instance (if one does not already exist). You will be taken to the ASM Instance Creation page.
 - Unless $ORACLE_HOME/dbs/ is a shared filesystem, you will not be able to create an SPFILE. Use an IFILE.
 - Let ASM create a listener if prompted.

10. On the ASM Disk Group page:


 - Click the "Create New" button. The disk groups configured above in the ASM Library Driver install should appear.
 - On the Create Disk Group page, your ASM disk(s) should appear. If not, exit the DBCA and restart.
 - At the top, choose a Disk Group Name.
 - Choose your redundancy level (external).
 - Then check the disks to belong to the Disk Group and click OK.

11. On the Recovery Configuration page, for Cluster File System, the optional flash recovery area defaults to $ORACLE_BASE/flash_recovery_area.

12. Follow the remaining steps in a typical database creation.

13. Before creating the database, choose Generate Database Creation Scripts.


6. Post-Installation Tasks

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/postinst.htm#sthref1144

1. Ensure NETCA has run to configure Oracle Networking components.

2. Back up the voting disk.
 - Also make a backup of the voting disk after adding or removing a node.
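In 10g a voting disk on a raw device can be backed up with dd. A minimal sketch, wrapped in a function so the device and destination stay explicit (the raw device path in the example is an assumption; use the voting device you bound earlier):

```shell
# back up a voting disk (or any raw device) to a file with dd
backup_voting_disk() {
  src=$1    # e.g. /dev/raw/raw2 (illustrative path)
  dest=$2   # e.g. /backup/ora_vote_20060901.dmp
  dd if="$src" of="$dest" bs=1M 2>/dev/null
}

# example (device path illustrative):
# backup_voting_disk /dev/raw/raw2 /backup/ora_vote.dmp
```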


7. Oracle Files

Creation of these files is not necessary when using ASM. They are listed here to assist with database planning and sizing.

Local Files

These files are local to each node and do not need to be OCFS or ASM files.
 - archived redo logs
 - init file

Shared Oracle Database Files

These files (except ora_ocr and ora_vote) may live in ASM disk groups.
 - ora_ocr         100M  raw file for CRS cluster registry
 - ora_vote        20M   raw file for CRS voting disk
 - controlfile_01  500M
 - controlfile_02  500M
 - system_01       1G
 - system_02       1G
 - sysaux_01       800M  300M + 250M for each instance
 - sysaux_02       800M
 - srvcfg_01       500M  optional (for server management file)
 - sp_file_01      100M  optional (for server parameter file)
 - example_01      200M  optional
 - cwmlite_01      200M  optional
 - xdb_01          100M  optional
 - odm_01          300M  optional
 - indx_01         100M  optional
 - tools_01        500M
 - drsys_01        500M  optional (for interMedia)
 - drsys_02        500M  optional (for interMedia)
 - snaplogs_01     2G    optional (for replication)
 - users_01        500M
 - temp_01         2G
 - temp_02         2G    (for default temp TS switching)
 - undo_i1_01      2G
 - undo_i1_02      2G
 - undo_i1_03      2G    (for undo TS switching)
 - undo_i1_04      2G    (for undo TS switching)
 - undo_i2_01      2G
 - undo_i2_02      2G
 - undo_i2_03      2G    (for undo TS switching)
 - undo_i2_04      2G    (for undo TS switching)
 - redo_i1_01      100M
 - redo_i1_02      100M
 - redo_i1_03      100M
 - redo_i1_04      100M
 - redo_i1_05      100M
 - redo_i1_06      100M  (for high trans. growth)
 - redo_i1_07      100M  (for high trans. growth)
 - redo_i1_08      100M  (for high trans. growth)
 - redo_i1_09      100M  (for high trans. growth)
 - redo_i1_10      100M  (for high trans. growth)
 - redo_i2_01      100M
 - redo_i2_02      100M
 - redo_i2_03      100M
 - redo_i2_04      100M
 - redo_i2_05      100M
 - redo_i2_06      100M  (for high trans. growth)
 - redo_i2_07      100M  (for high trans. growth)
 - redo_i2_08      100M  (for high trans. growth)
 - redo_i2_09      100M  (for high trans. growth)
 - redo_i2_10      100M  (for high trans. growth)


Shared Database Files for the Application

<list files here>
