
Oracle cluster on CentOS using CentOS clusterware

Introduction

This document covers the setup and configuration of a CentOS cluster running an Oracle 11g database as a cluster resource (a failover configuration, not RAC).

Download Software and assumptions

1. CentOS 5.6 64-bit
2. Oracle Database 11g R2 (11.2.0) binaries (the paths later in this guide use 11.2.0)
3. You should have internet access for yum install
4. ILO cards should have been configured for fencing
5. Both CentOS nodes should communicate with each other through:
   1 public interface (bonding is suggested)
   1 private interconnect (bonding is suggested)

SAN storage LUNs should be accessible to both nodes, as cluster storage.

Modify /etc/hosts in both nodes

The /etc/hosts file on each server should be modified to include the ILOs, the private interfaces and the public interfaces. See the example below.

[root@db2 ~]# more /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

#::1 localhost6.localdomain6 localhost6 ## comment this line

192.168.0.70 db1.dyndns.org ## This is the public hostname for db1

192.168.2.70 db1 ## This is the private hostname for db1

192.168.0.80 db2.dyndns.org ## This is the public hostname for db2

192.168.2.50 db2 ## This is the private hostname for db2

# Cluster VIP for Oracle

192.168.0.90 oraprd.dyndns.org ##Cluster Virtual Host

192.168.0.151 ilodb1 ## Ilo1

192.168.0.152 ilodb2 ##Ilo 2

[root@db2 ~]#


Discovering the needed storage

Perform fdisk -l to check the attached disks. Your storage admin should have assigned those disks to both nodes. A reboot of the servers may be necessary to see the new disks.

The lsscsi rpm must be installed on both nodes.

Check the lsscsi output on both nodes; it should display the same disks.


Before rebooting, you may try the following command to discover the new LUNs online (run it on both nodes):

echo "- - -" > /sys/class/scsi_host/host3/scan

Where host3 can be replaced by the output of the command:

lsscsi -H (this command displays all fibre channel cards connected to your server)

[root@db1 ~]# lsscsi -H
[0]    <NULL>

This means that the fibre channel card has instance 0, so it would be host0.

Keep /var/log/messages open on each node (from another PuTTY session); you will see the newly attached devices appear there the moment they are discovered by the command above.
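For convenience, you can trigger the rescan on every SCSI host at once with a small loop (a sketch; run it as root on both nodes):

# Rescan all SCSI hosts; watch /var/log/messages for the new LUNs
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done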

Create the disk layout (LVM). This is to be done on one node only.

Create 4 PVs. This configuration is enough for a simple Oracle database installation with no heavy usage.

If a device has partitions left over from previous usage (e.g. Solaris), delete those partitions before creating the PV. Use the fdisk command on the device.
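For example, a sketch of the PV creation (the /dev/sd* device names are placeholders; use the LUNs from your own fdisk/lsscsi output):

pvcreate /dev/sdb
pvcreate /dev/sdc
pvcreate /dev/sdd
pvcreate /dev/sde
pvs    # verify the four physical volumes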


Create the Volume Groups

Create vg_orabin, vg_oradata, vg_redo1, vg_redo2
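A sketch, reusing the placeholder devices from the previous step (one PV per VG):

vgcreate vg_orabin /dev/sdb
vgcreate vg_oradata /dev/sdc
vgcreate vg_redo1 /dev/sdd
vgcreate vg_redo2 /dev/sde
vgs    # verify the volume groups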


Create the Logical Volumes

Create lv_orabin, lv_oradata, lv_redo1, lv_redo2
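A sketch (the sizes are placeholders; adjust them to your LUNs, or use -l 100%FREE to take each whole VG):

lvcreate -L 20G -n lv_orabin vg_orabin
lvcreate -L 50G -n lv_oradata vg_oradata
lvcreate -L 2G -n lv_redo1 vg_redo1
lvcreate -L 2G -n lv_redo2 vg_redo2
lvs    # verify the logical volumes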


Create the file systems. If a logical volume is inactive you cannot create a filesystem on it, so activate it first with: lvchange -ay lv_name
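A sketch of the filesystem creation (ext3 is an assumption, since the original screenshots are not available):

lvchange -ay vg_orabin/lv_orabin    # only needed if the LV is inactive
mkfs.ext3 /dev/vg_orabin/lv_orabin
mkfs.ext3 /dev/vg_oradata/lv_oradata
mkfs.ext3 /dev/vg_redo1/lv_redo1
mkfs.ext3 /dev/vg_redo2/lv_redo2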


Reboot the other node so that it reads all the LVM changes (in my experience this is needed).

[root@db2 ~]# shutdown -ry now

After the reboot, verify that the node sees the logical volumes.


Install all cluster and cluster storage rpms (ON BOTH NODES).


These include cman, rgmanager, clvmd, ricci, luci, system-config-cluster and lvm2-cluster.
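A single yum command can pull them all in (a sketch; note that on CentOS 5 the clvmd daemon itself is shipped in the lvm2-cluster package):

yum install -y cman rgmanager ricci luci system-config-cluster lvm2-cluster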


Start the cluster configuration GUI. (Alternatively, you may use the luci web interface to build the cluster; you have to start the ricci and luci services first.)
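For example (run as root; on stock CentOS 5 packages, luci usually has to be initialized once with luci_admin init, which is an assumption here):

service ricci start         # needed if you manage the cluster with luci/conga
service luci start          # only if you use the luci web interface
system-config-cluster &     # the GUI used in this guide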


Click on Create New Configuration

Name the cluster, do not use quorum yet, and click OK.


Click on Add a Cluster Node, enter the private hostname (the one used for the interconnect), choose 1 vote and click OK.


Click again on Add a Cluster Node, enter the other node's hostname (interconnect), choose 1 vote and click OK.


Click on Failover Domains, then Create a Failover Domain.


Name the domain oracle_domain and click OK.


Click on Available Cluster Nodes and choose both nodes


Click on both check boxes


Adjust the priority (the node with the lower priority number is preferred for the cluster service): db1=1, db2=2. Then click Close.


Click on Resources and then on Create a Resource (4 times for the file systems, once for the VIP). Use the logical volumes created before. We usually give the resources names similar to the mount points of the filesystems.

/orabin - ORACLE_HOME

/oradata - datafiles + control files + temporary files

/redo1, /redo2 - redo logs (setting up redo log multiplexing, as it is called, is done later by the Oracle DBA and can be done online; for our installation all redo log files will be stored in /oradata/….)

/arch - only if the database is to be configured in archive log mode, for online backups (not our case; an Oracle DBA job as well). If you know from the beginning of the project that the database will not run in archive log mode, you do not need this file system at all.


Now create the VIP IP resource and click OK. The VIP should exist in /etc/hosts on both nodes. Check the Monitor Link checkbox.


Click on Services and then on Create a Service


Choose TESTDB as the service name and click OK.


Make sure the Autostart This Service and Restart options are checked. Click on Failover Domain and choose oracle_domain.


Click on Add a Shared Resource to this service (5 times, one for each resource). You must choose the VIP first and then the file systems.


Now click on the VIP resource and then on Attach a Shared Resource to the selection (4 times, one for each file system).


Click the Close button.


Go to File > Save.


Press OK to accept the defaults.


Close the system-config-cluster GUI

Enable automatic startup of the cluster daemons (on both nodes, db1 and db2).
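A sketch of the chkconfig commands (qdiskd becomes relevant once the quorum device is added later in this guide):

chkconfig cman on
chkconfig clvmd on
chkconfig rgmanager on
chkconfig qdiskd on    # after the quorum device is configured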


Reboot both nodes and check (from the ILO console) that the cluster is functional. When both systems are up, run clustat to verify that the cluster is configured properly.


Create the quorum configuration (by editing cluster.conf directly) on node db1. Then propagate it to db2 with ccs_tool.

Create a quorum device with mkqdisk. Choose an empty device (quorum devices should normally be smaller than 1 GB).
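A sketch (/dev/sdf and the label oraqdisk are placeholders):

mkqdisk -c /dev/sdf -l oraqdisk    # initialize and label the quorum disk
mkqdisk -L                         # list quorum disks, to verify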


Verify the quorum device on db2 as well.

Modify cluster.conf on db1 and add the three highlighted lines (under the second line of the file).


Change two_node=1 to two_node=0 (two_node=1 is used only when there is no quorum device).
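Since the highlighted screenshot is not available, here is a rough sketch of what the top of cluster.conf looks like after these changes (the cluster name, label and timing values are assumptions; the label must match the one given to mkqdisk):

<?xml version="1.0"?>
<cluster config_version="3" name="oracluster">
  <quorumd interval="1" tko="10" votes="1" label="oraqdisk"/>
  <cman expected_votes="3" two_node="0"/>
  ...
</cluster>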


Increase the config_version in cluster.conf.


Save the cluster.conf and propagate it to db2.
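For example (run on db1; ccs_tool pushes the updated configuration to the other cluster member):

ccs_tool update /etc/cluster/cluster.conf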


Configure the ILO cards on both nodes. Connect to each ILO with the web interface and make sure that you have set up a fence user/password. The user should have the right to connect to and reset the server. The ILO cards should be connected to a private switch accessible to the sysadmin for maintenance. The original screenshots showed the ILO configuration on one node; the same configuration should be done on the other ILO also.


Update cluster.conf on db1 with the ILO fence information by inserting the highlighted lines. In our case we used ilodb1 and ilodb2 as the ILO hostnames, and we updated /etc/hosts on both nodes with the ILO IPs.


Modify cluster.conf on db1 and add the highlighted lines.
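As the highlighted screenshots are not available, here is a rough sketch of the usual CentOS 5 fencing entries (login and passwd are placeholders for the fence user created above):

<clusternode name="db1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="ilodb1"/>
    </method>
  </fence>
</clusternode>
<!-- the same block for db2, pointing at device ilodb2 -->
<fencedevices>
  <fencedevice agent="fence_ilo" name="ilodb1" hostname="ilodb1" login="fenceuser" passwd="secret"/>
  <fencedevice agent="fence_ilo" name="ilodb2" hostname="ilodb2" login="fenceuser" passwd="secret"/>
</fencedevices>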


Increase the config_version again.


Save and propagate to db2


Test connectivity to ilodb1, ilodb2, db1, db2, the VIP and the gateway, from both nodes:

[root@db1 ~]# ping ilodb1

PING ilodb1 (192.168.0.151) 56(84) bytes of data.

64 bytes from ilodb1 (192.168.0.151): icmp_seq=1 ttl=64 time=1.83 ms

64 bytes from ilodb1 (192.168.0.151): icmp_seq=2 ttl=64 time=0.580 ms

--- ilodb1 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.580/1.207/1.834/0.627 ms

[root@db1 ~]# ping ilodb2

PING ilodb2 (192.168.0.152) 56(84) bytes of data.

64 bytes from ilodb2 (192.168.0.152): icmp_seq=1 ttl=64 time=1.50 ms

64 bytes from ilodb2 (192.168.0.152): icmp_seq=2 ttl=64 time=0.511 ms

--- ilodb2 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms


rtt min/avg/max/mdev = 0.511/1.006/1.501/0.495 ms

[root@db1 ~]# ping 192.168.0.1

PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.

64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.233 ms

--- 192.168.0.1 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms

[root@db1 ~]# ping db1

PING db1 (192.168.2.70) 56(84) bytes of data.

64 bytes from db1 (192.168.2.70): icmp_seq=1 ttl=64 time=0.063 ms

--- db1 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms

[root@db1 ~]# ping db2

PING db2 (192.168.2.50) 56(84) bytes of data.

64 bytes from db2 (192.168.2.50): icmp_seq=1 ttl=64 time=0.178 ms

--- db2 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms

[root@db1 ~]#

[root@db1 ~]# ping 192.168.0.90

PING 192.168.0.90 (192.168.0.90) 56(84) bytes of data.

64 bytes from 192.168.0.90: icmp_seq=1 ttl=64 time=0.065 ms

64 bytes from 192.168.0.90: icmp_seq=2 ttl=64 time=0.029 ms

--- 192.168.0.90 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.029/0.047/0.065/0.018 ms


Reboot both nodes so that the quorum device starts properly and counts in the voting. The qdiskd daemon should always start before cman.

[root@db1 ~]# shutdown -ry now (reboot if it is going too slowly)

[root@db2 ~]# shutdown -ry now (reboot if it is going too slowly)

Note: db1 and db2 are going to fence each other once, until they join the cluster. Just wait. In some cases you may have to update /etc/cluster/cluster.conf with:

<fence_daemon post_fail_delay="0" post_join_delay="120"/>

Check the cluster status after both systems are up and running.


Oracle installation steps

Create the user groups oinstall and dba, then create the user oracle. Make sure they share the same user ID and group IDs between both nodes. This has to be done on both cluster nodes.
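A sketch of the commands (the numeric IDs are placeholders; what matters is that they are identical on db1 and db2):

groupadd -g 600 oinstall
groupadd -g 601 dba
useradd -u 600 -g oinstall -G dba -m -d /home/oracle oracle
passwd oracle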


Modify .bash_profile in /home/oracle and add the following (on both nodes). /home/oracle is assumed to be the home directory of the oracle user.

export ORACLE_HOME=/orabin/oracle/product/11.2.0/TESTDB

export ORACLE_BASE=/orabin/oracle

export ORACLE_SID=TESTDB

export PATH=$ORACLE_HOME/bin:$PATH

Modify /etc/sysctl.conf on both nodes and add (at the end of the file):

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

kernel.sem = 250 32000 100 128

Perform sysctl -p for the kernel to re-read the file online (both nodes).

Modify /etc/security/limits.conf and add (both nodes, end of the file):

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

NOTE: The minimum swap space you need is 2 GB on both servers.

Recursively change the ownership of all cluster file systems to oracle.oinstall (both nodes):


[root@db2 /]# chown -R oracle.oinstall /orabin/ /oradata/ /redo1 /redo2

[root@db1 /]# chown -R oracle.oinstall /orabin/ /oradata/ /redo1 /redo2

Modify /etc/profile and add the following lines (both nodes, end of file).
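The original highlighted lines exist only in the screenshots; what follows is a sketch of the usual addition from the standard Oracle installation guidance (a ulimit block for the oracle user):

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi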

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session required pam_limits.so


Disable SELinux on both nodes. Set the SELINUX parameter to disabled in the file /etc/selinux/config; this makes the change persist after a system reboot. To change SELinux enforcement while the systems are online, do the following (both nodes):

echo 0 > /selinux/enforce


Move the Oracle binary CDs to /home/oracle and unzip them both (as user oracle) on node db1.

Install the following rpms (or later versions) on both nodes, for Oracle to install properly:

binutils-2.17.50.0.6

compat-libstdc++-33-3.2.3

compat-libstdc++-33-3.2.3 (32 bit)

elfutils-libelf-0.125

elfutils-libelf-devel-0.125

gcc-4.1.2

gcc-c++-4.1.2

glibc-2.5-24

glibc-2.5-24 (32 bit)

glibc-common-2.5

glibc-devel-2.5

glibc-devel-2.5 (32 bit)

glibc-headers-2.5

ksh-20060214


libaio-0.3.106

libaio-0.3.106 (32 bit)

libaio-devel-0.3.106

libaio-devel-0.3.106 (32 bit)

libgcc-4.1.2

libgcc-4.1.2 (32 bit)

libstdc++-4.1.2

libstdc++-4.1.2 (32 bit)

libstdc++-devel 4.1.2

make-3.81

sysstat-7.0.2

unixODBC-2.2.11

unixODBC-2.2.11 (32 bit)

unixODBC-devel-2.2.11

unixODBC-devel-2.2.11 (32 bit)

You may verify that an rpm is already installed with the following command (later versions are OK). Here are examples for a few of the rpms:

[root@db1 oracle]# rpm -qa|grep -i sysstat

sysstat-7.0.2-11.el5

[root@db1 oracle]# rpm -qa|grep -i libaio

libaio-devel-0.3.106-5

libaio-0.3.106-5

libaio-0.3.106-5

libaio-devel-0.3.106-5

(libaio and libaio-devel are each displayed twice, which shows that both the 32-bit and 64-bit packages are installed.)

[root@db1 oracle]# rpm -qa|grep -i libstdc++

compat-libstdc++-33-3.2.3-61

libstdc++-devel-4.1.2-51.el5

compat-libstdc++-33-3.2.3-61

libstdc++-4.1.2-51.el5


libstdc++-4.1.2-51.el5

If an rpm is not installed, perform yum install rpm-name for it.

NOTE: Another way to find and install the necessary rpms (if, for example, the Internet is not accessible) is the CentOS DVD. Insert the DVD/CD, go to /media/cdrom (check where the CD auto-mounts itself), and move to the Server directory. In this directory you should find the rpms that you should install on both systems.

Use the command rpm -ivh rpm-name to install, and

rpm -qa | grep -i rpm-name to verify the existence.


To install the Oracle database under the cluster virtual hostname (oraprd.dyndns.org), we use the following trick.

a. Login as root on db1 and issue: hostname oraprd.dyndns.org

b. Login as the oracle user and start the installer on node db1 (cd /home/oracle/database):

c. ./runInstaller (don't forget to have Xwin running on your laptop, and to export DISPLAY=yourIP:0.0)

d. After the installation we reverse the hostname back to db1 (as user root).

e. This trick is done only for the Oracle dbconsole to work properly on both nodes. Dbconsole binds itself to the actual hostname, and this cannot be changed later.


Say yes to the warning regarding the email address


Click on Fix & Check Again. Then go to /tmp/CVU_11.2.0.1.0_oracle and run (as root): ./runfixup.sh


Click Ok to continue


Click on Finish. DO NOT FORGET TO COPY THE FIXES TO THE OTHER NODE (db2) ALSO. In our case the only finding was the kernel.sem values in /etc/sysctl.conf (kernel.sem = 250 32000 100 128, as listed earlier). Copy these values to /etc/sysctl.conf on db2 and perform sysctl -p. If you use the sysctl.conf values mentioned earlier, you will probably have no problem.

If the Oracle checks have no more findings, you can click on Finish.


You may monitor the install log at any time by tailing the installActions log under the oraInventory logs directory.

When prompted, run the following scripts as root, and click Ok


Make sure that all entries in the /etc/sysctl.conf file on db1 are identical to those on db2. If necessary, copy the file from db1 to db2 and run: sysctl -p

Last lines of /etc/sysctl.conf on node db1 (copy them to db2 also and run sysctl -p):

# Controls the maximum number of shared memory segments, in pages

kernel.shmall = 4294967296

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

kernel.sem = 250 32000 100 128

Log in as oracle and run netca (the Network Configuration Assistant).


To create the Oracle database, log in as oracle and run dbca (the Database Configuration Assistant).


Use the same passwords for all Oracle sys accounts: abc123


Click on Sample Schemas only if it is a test database


Accept the default memory distribution; this is normally good for most installations. (If you need more memory for other resources, you may decrease the percentage to 30%.)

Oracle processes is the maximum number of connections allowed to the Oracle server. This is again a DBA decision and depends on the project.


Click on Finish


Click on Ok


Click on Exit


Copy the /etc/oratab and /etc/oraInst.loc files from db1 to db2.

Switch hostname back to normal (from oraprd to db1)

[root@oraprd etc]# hostname db1 (as user root)

Create a TESTDB.sh script in /etc/init.d on both nodes. This is the script used by the cluster to stop/start/status the TESTDB service.
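The original script is only shown as a screenshot, so here is a minimal sketch of what such an rgmanager script resource usually looks like (the ORACLE_HOME and SID values match this guide; the rest is an assumption):

#!/bin/bash
# /etc/init.d/TESTDB.sh - start/stop/status wrapper called by rgmanager
export ORACLE_HOME=/orabin/oracle/product/11.2.0/TESTDB
export ORACLE_SID=TESTDB

case "$1" in
  start)
    su - oracle -c "lsnrctl start"
    su - oracle -c "echo startup | sqlplus -S / as sysdba"
    ;;
  stop)
    su - oracle -c "echo 'shutdown immediate' | sqlplus -S / as sysdba"
    su - oracle -c "lsnrctl stop"
    ;;
  status)
    # rgmanager polls this regularly; report failure if the instance died
    pgrep -f "ora_smon_${ORACLE_SID}" > /dev/null || exit 1
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
exit 0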


Open system-config-cluster and add the TESTDB.sh script resource.


Click on Edit Service Properties, choose the last resource (redo2) and click on Attach a Shared Resource to the selection.


Select the TESTDB.sh script and click Ok


Click on Send to Cluster


Create the following soft links (as the oracle user) in order for the Oracle dbconsole to work properly. We use the real hostnames (db1, db2) and link them to the virtual hostname (oraprd).

cd /orabin/oracle/product/11.2.0/TESTDB/oc4j/j2ee

ln -s OC4J_DBConsole_oraprd.dyndns.org_TESTDB/ OC4J_DBConsole_db1_TESTDB

ln -s OC4J_DBConsole_oraprd.dyndns.org_TESTDB/ OC4J_DBConsole_db2_TESTDB

cd /orabin/oracle/product/11.2.0/TESTDB

ln -s oraprd.dyndns.org_TESTDB/ db1_TESTDB

ln -s oraprd.dyndns.org_TESTDB/ db2_TESTDB

As user oracle, go to $ORACLE_HOME/network/admin and insert the highlighted lines into the file listener.ora. Save the file and then run: lsnrctl reload
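The highlighted lines exist only in the screenshots; the usual edit (a sketch) binds the listener to the cluster VIP and statically registers the instance:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oraprd.dyndns.org)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /orabin/oracle/product/11.2.0/TESTDB)
      (SID_NAME = TESTDB)
    )
  )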


Also, in the same directory as before, edit the file tnsnames.ora and change the SERVICE_NAME entry to use the SID instead.
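A sketch of the edited entry, with SERVICE_NAME replaced by SID as described above:

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraprd.dyndns.org)(PORT = 1521))
    (CONNECT_DATA =
      (SID = TESTDB)
    )
  )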


Perform two failover tests on the TESTDB service, as user root, to check the cluster functionality:

clusvcadm -r TESTDB

clustat

cman_tool status

cman_tool nodes


Log into the database (via a local Firefox on db1 or db2) and check the database health (a DBA task). The password for the sys/system accounts is abc123.


END OF INSTALLATION