INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX

Step by step 10g R2 RAC installation on HP-UX



Installation Guide for 10g Release 2 RAC on HP-UX 11.23





CONTENTS

HARDWARE CONSIDERATIONS

SOFTWARE CONSIDERATIONS

STORAGE CONSIDERATIONS

CLUSTER MANAGEMENT CONSIDERATIONS

INSTALLATION OF ORACLE SOFTWARE


HARDWARE CONSIDERATIONS:

1. SYSTEM REQUIREMENTS

2. NETWORK REQUIREMENTS


SYSTEM PARAMETERS REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME               RECOMMENDED VALUE                            AVAILABLE VALUE
RAM SIZE                     512 MB                                       16 GB
SWAP SPACE                   2 * RAM SIZE (approx. 1 GB)                  20 GB
DISK SPACE IN TMP DIRECTORY  400 MB                                       3 GB
TOTAL DISK SPACE             1 GB                                         20 GB
OPERATING SYSTEM             HP-UX 11.23 (Itanium2), 11.23 (PA-RISC),     HP-UX 11.23 (PA-RISC)
                             11.11 (PA-RISC)
COMPILER                     HP-UX 11i (11.11), HP-UX 11i v2 (11.23)      ?
LINKS                        9 links need to be installed                 Links installed
ASYNCHRONOUS I/O             Present by default                           Present
HP SERVICEGUARD              HP Serviceguard A.11.16, SGeRAC A.11.16      HP Serviceguard A.11.16, SGeRAC A.11.16

NETWORK PARAMETERS REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME                                    RECOMMENDED VALUE                                  AVAILABLE VALUE
NETWORK ADAPTERS                                  Two (1. public interface, 2. private interface)    Values are assigned
INTERFACE NAME ASSOCIATED WITH NETWORK ADAPTERS   Same on all the nodes                              Interface names are provided
REMOTE COPY (rcp)                                 Enabled                                            Enabled


NAME OF THE TWO NODES MADE AT CRIS: 1. prod_db1  2. prod_db2

PARAMETER NAME                                                       GRANTED VALUE
PUBLIC IP ADDRESS & ASSOCIATED HOSTNAME REGISTERED IN THE DNS
FOR THE PUBLIC NETWORK INTERFACE                                     The required IP addresses are provided
PRIVATE IP ADDRESS FOR THE PRIVATE NETWORK INTERFACE                 The required IP addresses are provided
VIP ADDRESS PER NODE WITH DEFINED HOSTNAME, RESOLVED THROUGH DNS     The required IP addresses are provided


SOFTWARE CONSIDERATIONS:

1. PATCHES REQUIRED

2. KERNEL PARAMETER SETTINGS


PATCHES REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

HP-UX 11.23 (ITANIUM2 / PA-RISC):

HP-UX B.11.23.0409 OR LATER

QUALITY PACK BUNDLE:

LATEST PATCH BUNDLE: QUALITY PACK PATCHES FOR HP-UX 11I V2, MAY 2005

HP-UX 11.23 PATCHES:

PHSS_32502: ARIES CUMULATIVE PATCH (REPLACED PHSS_29658)

PHSS_33275: LINKER + FDP CUMULATIVE PATCH (REPLACED PHSS_31856,PHSS_29660)

PHSS_29655: AC++ COMPILER (A.05.52)

PHSS_29656: HP C COMPILER (A.05.52)

PHSS_29657: U2COMP/BE/PLUGIN LIBRARY PATCH

PHKL_31500: 11.23 SEPT04 BASE PATCH (REPLACED PHKL_29817,PHCO_29957, PHKL_30089, PHNE_30090,PHNE_30093,PHKL_30234,PHKL_30245)

ALL THE PATCHES ARE INSTALLED AND THE REQUIRED SOFTWARE CONSIDERATIONS ARE MET.
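To cross-check that the listed patches and the Quality Pack bundle are actually installed, swlist can be queried as sketched below; the swlist levels and grep pattern are an assumption and may need adjustment on a given system:

# swlist -l patch | grep -E 'PHSS_32502|PHSS_33275|PHKL_31500'
# swlist -l bundle | grep -i 'quality pack'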


KERNEL CONFIGURATION REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME    RECOMMENDED VALUE            ASSIGNED VALUE
nproc             4096                         4200
msgmni            nproc                        4200
ksi_alloc_max     (nproc*8)                    33600
maxdsiz           1073741824                   1073741824
maxdsiz_64bit     2147483648                   2147483648
maxuprc           ((nproc*9)/10)               3780
msgmap            (msgmni+2)                   4202
msgtql            nproc                        4200
msgseg            (nproc*4); at least 32767    32767
ninode            (8*nproc+2048)               35648
ncsize            (ninode+1024)                36672
nflocks           nproc                        4200
semmni            (nproc*2)                    4200
semmns            (semmni*2)                   8400
semmnu            (nproc-4)                    4196
shmmax            1073741824                   4292870144
shmmni            512                          512
shmseg            120                          120
swchunk           4096                         4096
semvmx            32767                        32767
vps_ceiling       64                           64
maxssiz           134217728                    134217728
maxssiz_64bit     1073741284                   1073741284
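On HP-UX 11.23 the kernel parameters in this table can be checked and adjusted with kctune (or through SAM/kcweb); a minimal sketch using two parameters from the table, assuming kctune is used rather than a kernel rebuild:

# kctune nproc shmmax            # display the current settings of selected parameters
# kctune nproc=4200              # assign a new value; kctune reports if a reboot is required
# kctune shmmax=4292870144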


STORAGE CONSIDERATIONS:

1. STORAGE OPTION FOR ORACLE CRS, DATABASE AND RECOVERY FILES

2. CONFIGURING DISKS FOR AUTOMATIC STORAGE MANAGEMENT

3. CONFIGURING RAW LOGICAL VOLUMES


STORAGE CONSIDERATIONS FOR ORACLE CRS, DATABASE AND RECOVERY FILES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

The following table shows the storage options supported for storing Oracle Cluster Ready Services (CRS) files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle CRS files include the Oracle Cluster Registry (OCR) and the CRS voting disk. Oracle recovery files include archive log files.

STORAGE OPTION                                       CRS    DATABASE    RECOVERY
AUTOMATIC STORAGE MANAGEMENT                         NO     YES         YES
SHARED RAW LOGICAL VOLUMES (REQUIRES SGeRAC)         YES    YES         NO
SHARED RAW DISK DEVICES AS PRESENTED TO HOSTS        YES    YES         NO
SHARED RAW PARTITIONS (ITANIUM2 ONLY)                YES    YES         NO
VERITAS CFS (PLANNED SUPPORT FOR RAC 10g IN DEC05)   YES    YES         YES

 

CONFIGURING CRS FILES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Create a Raw Device for:        File Size:   Name Given To The File:   Comments:
OCR (Oracle Cluster Registry)   100 MB       ora_ocr_raw_100m          This raw logical volume is created only once on the cluster.
                                                                       If more than one database is created on the cluster, they all
                                                                       share the same Oracle Cluster Registry.
Oracle CRS voting disk          20 MB        ora_vote_raw_20m          This raw logical volume also needs to be created only once on
                                                                       the cluster. If more than one database is created on the
                                                                       cluster, they all share the same Oracle CRS voting disk.
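As an illustration of how the two CRS raw logical volumes could be created in the shared volume group, the LVM sketch below uses the vg_rac volume group referenced later in this guide; the physical disk and the group-file minor number are assumptions that must match the local configuration:

# pvcreate -f /dev/rdsk/c8t0d1                        # assumed shared disk
# mkdir /dev/vg_rac
# mknod /dev/vg_rac/group c 64 0x080000               # minor number is an example only
# vgcreate /dev/vg_rac /dev/dsk/c8t0d1
# lvcreate -L 100 -n ora_ocr_raw_100m /dev/vg_rac     # 100 MB raw LV for the OCR
# lvcreate -L 20 -n ora_vote_raw_20m /dev/vg_rac      # 20 MB raw LV for the CRS voting disk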


The command given on both the nodes to check the available disks, and the resultant output obtained, are as follows:

# /usr/sbin/ioscan -fun -C disk

The output from this command is similar to the following:

Class  I  H/W Path        Driver  S/W State  H/W Type  Description
============================================================================
disk   4  255/255/0/0.0   sdisk   CLAIMED    DEVICE    HSV100 HP
                          /dev/dsk/c8t0d0  /dev/rdsk/c8t0d0
disk   5  255/255/0/0.1   sdisk   CLAIMED    DEVICE    HSV100 HP
                          /dev/dsk/c8t0d1  /dev/rdsk/c8t0d1

This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and character raw device name (/dev/rdsk/cxtydz).

CONFIGURING DISKS FOR AUTOMATIC STORAGE MANAGEMENT BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Automatic Storage Management (ASM) is a feature in Oracle Database 10g that provides the database administrator with a simple storage management interface that is consistent across all server and storage platforms. As a vertically integrated file system and volume manager, purpose-built for Oracle database files, ASM provides the performance of async I/O with the easy management of a file system. ASM provides capability that saves the DBA’s time and provides flexibility to manage a dynamic database environment with increased efficiency.

Automatic Storage Management is part of the database kernel. It is linked into $ORACLE_HOME/bin/oracle so that its code may be executed by all database processes. One portion of the ASM code allows for the start-up of a special instance called an ASM Instance. ASM Instances do not mount databases, but instead manage the metadata needed to make ASM files available to ordinary database instances.

ASM instances manage the metadata describing the layout of the ASM files. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files. This requires that a second portion of the ASM code run in the database instance, in the I/O path.

Four disk groups are created at CRIS, namely ASMdb1, ASMdb2, ASMdb3 and ASMARCH. For each disk that has to be added to a disk group, enter the following command to verify that it is not already part of an LVM volume group:


# /sbin/pvdisplay /dev/dsk/cxtydz

If this command displays volume group information, the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group. The device paths must be the same on both systems; if they are not, they are mapped to one virtual device name.

The following commands are executed to change the owner, group, and permissions on the character raw device file for each disk that is added to a disk group:

# chown oracle:dba /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz

The redundancy level chosen for the ASM disk groups is External Redundancy, because the storage is an intelligent subsystem (HP StorageWorks EVA or HP StorageWorks XP).

Useful ASM V$ views:

 

View              ASM Instance                                             DB Instance
V$ASM_CLIENT      Shows each database instance using an ASM disk group     Shows the ASM instance if the database has open ASM files
V$ASM_DISK        Shows disks discovered by the ASM instance, including    Shows a row for each disk in the disk groups in use by
                  disks that are not part of any disk group                the database instance
V$ASM_DISKGROUP   Shows disk groups discovered by the ASM instance         Shows each disk group mounted by the local ASM instance
V$ASM_FILE        Displays all files for each ASM disk group               Returns no rows
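As a usage example, these views can be queried through SQL*Plus on either instance type; the +ASM1 SID below is an assumed name for the ASM instance on the first node:

$ export ORACLE_SID=+ASM1                 # assumed ASM instance SID
$ sqlplus / as sysdba
SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
SQL> SELECT group_number, path, header_status FROM v$asm_disk;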


CONFIGURING RAW LOGICAL VOLUMES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Create a Raw Device for:                  File Size:                             Sample Name:
SYSTEM tablespace                         500 MB                                 dbname_system_raw_500m
SYSAUX tablespace                         300 + (Number of instances * 250) MB   dbname_sysaux_raw_800m
An undo tablespace per instance           500 MB                                 dbname_undotbsn_raw_500m
EXAMPLE tablespace                        160 MB                                 dbname_example_raw_160m
USERS tablespace                          120 MB                                 dbname_users_raw_120m
Two ONLINE redo log files per instance    120 MB                                 dbname_redon_m_raw_120m
First and second control file             110 MB                                 dbname_control[1|2]_raw_110m
TEMP tablespace                           250 MB                                 dbname_temp_raw_250m
Server parameter file (SPFILE)            5 MB                                   dbname_spfile_raw_5m
Password file                             5 MB                                   dbname_pwdfile_raw_5m
OCR (Oracle Cluster Registry)             100 MB                                 ora_ocr_raw_100m
Oracle CRS voting disk                    20 MB                                  ora_vote_raw_20m
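The database raw logical volumes in this table are created in the same way as the CRS volumes shown earlier; for example (sizes taken from the table, volume group vg_rac as used elsewhere in this guide, and "dbname" to be replaced by the actual database name):

# lvcreate -L 500 -n dbname_system_raw_500m /dev/vg_rac
# lvcreate -L 800 -n dbname_sysaux_raw_800m /dev/vg_rac
# lvcreate -L 120 -n dbname_redo1_1_raw_120m /dev/vg_rac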

Check that the volume groups are properly created and available using the following commands:

# strings /etc/lvmtab
# vgdisplay -v /dev/vg_rac

Change the permissions of the database volume group vg_rac to 777, then change the permissions of all raw logical volumes to 660 and their owner to oracle:dba.

# chmod 777 /dev/vg_rac


# chmod 660 /dev/vg_rac/r*
# chown oracle:dba /dev/vg_rac/r*

Change the permissions of the OCR logical volumes:

# chown root:oinstall /dev/vg_rac/rora_ocr_raw_100m
# chmod 640 /dev/vg_rac/rora_ocr_raw_100m

To enable Database Configuration Assistant (DBCA) later to identify the appropriate raw device for each database file, a raw device-mapping file must be created, as follows:

Set the ORACLE_BASE environment variable:

$ export ORACLE_BASE=/opt/oracle/product

Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

# mkdir -p $ORACLE_BASE/oradata/<dbname>
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata

Change directory to the $ORACLE_BASE/oradata/dbname directory.

Enter a command similar to the following to create a text file that can be used to create the raw device mapping file:

# find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf

Edit the dbname_raw.conf file so that it looks similar to the following:

system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
example=/dev/vg_name/rdbname_example_raw_160m
users=/dev/vg_name/rdbname_users_raw_120m
temp=/dev/vg_name/rdbname_temp_raw_250m
undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m


When configuring the Oracle user's environment, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file:

$ export DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf

CLUSTER MANAGEMENT CONSIDERATIONS:

1. CONFIGURATION OF HP SERVICEGUARD CLUSTER


CLUSTER MANAGEMENT CONSIDERATIONS BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Oracle RAC 10g includes its own clusterware and package management solution with the database product. This clusterware is included as part of the Oracle RAC 10g bundle. Oracle Clusterware consists of Oracle Cluster Ready Services (CRS) and Oracle Cluster Synchronization Services (CSS).

CRS supports services and workload management and helps to maintain the continuous availability of the services. CRS also manages resources such as the virtual IP (VIP) address for each node and the Global Services Daemon. CSS provides cluster management functionality in case no vendor clusterware such as HP Serviceguard is used.

CONFIGURATION OF HP SERVICEGUARD CLUSTER: -

After all the LAN cards are installed and configured, and the RAC volume group and the cluster lock volume group(s) are configured, cluster configuration can begin. Activate the lock disk on the configuration node ONLY. The lock volume can only be activated on the node where the cmapplyconf command is issued, so that the lock disk can be initialized accordingly.

# vgchange -a y /dev/vg_rac

Creation of a cluster configuration template:

# cmquerycl -n nodeA -n nodeB -v -C /etc/cmcluster/rac.asc

Check the cluster configuration:

# cmcheckconf -v -C rac.asc

Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster:

# cmapplyconf -v -C rac.asc

The cluster is not started until the cmrunnode command is run on each node or the cmruncl command is run.


Deactivate the lock disk on the configuration node after the cmapplyconf command:

# vgchange -a n /dev/vg_rac

Start up the cluster and view it to be sure it is up and running.

Start the cluster from any node in the cluster:

# cmruncl -v

or, on each node:

# cmrunnode -v

Make all RAC volume groups and Cluster Lock volume groups sharable and cluster aware (not packages) from the cluster configuration node. This has to be done only once.

# vgchange -S y -c y /dev/vg_rac

Then, on all the nodes, activate the volume group in shared mode in the cluster. This has to be done each time the cluster is started.

# vgchange -a s /dev/vg_rac

Check the cluster status:

# cmviewcl -v


INSTALLATION OF ORACLE SOFTWARE:

1. INSTALLATION OF ORACLE CLUSTER READY SERVICES

2. INSTALLATION OF ORACLE DATABASE RAC 10g

3. CREATION OF ORACLE DATABASE USING DATABASE CONFIGURATION ASSISTANT

4. ORACLE ENTERPRISE MANAGER 10g DATABASE CONTROL


INSTALLATION OF ORACLE CLUSTER READY SERVICES: -

Before the installation of CRS, a user is created who owns the Oracle RAC software, and the storage option to be used for the Oracle Cluster Registry (100 MB) and the CRS voting disk (20 MB) is chosen. Automatic Storage Management cannot be used to store these files, because they must be accessible before any Oracle instance starts. The DISPLAY environment variable must also be set before the CRS installation. The steps involved in the installation of CRS are as follows: -
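A minimal sketch of creating that software owner with the oinstall and dba groups used elsewhere in this guide; the home directory and shell below are assumptions chosen for illustration:

# /usr/sbin/groupadd oinstall                                   # Oracle inventory group
# /usr/sbin/groupadd dba                                        # OSDBA group
# /usr/sbin/useradd -g oinstall -G dba -m -d /home/oracle -s /usr/bin/sh oracle
# passwd oracle                                                 # set the oracle user's password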

Log in as the oracle user and set the ORACLE_HOME environment variable to the CRS home directory. Then start the Oracle Universal Installer from Disk1 by issuing the command $ ./runInstaller. Click Next on the OUI Welcome screen.
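For illustration, the environment settings mentioned here might look as follows before the installer is started; the CRS home path, display host, and staging directory are assumptions, not values from this document:

$ export ORACLE_HOME=/opt/oracle/product/crs    # assumed CRS home directory
$ export DISPLAY=admin-ws:0.0                   # assumed X display for the OUI
$ cd /stage/clusterware/Disk1                   # assumed staging location of Disk1
$ ./runInstaller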

Enter the inventory location and oinstall as the UNIX group name on the Specify Inventory Directory and Credentials page, then click Next. The OUI dialog then indicates that the orainstRoot.sh script in the oraInventory location should be run. Run the orainstRoot.sh script as the root user and click Continue.

The Specify File Locations Page contains predetermined information for the source of the installation files and the target destination information. Enter the CRS home name and its location in the target destination.

In the next Cluster Configuration screen the cluster name as well as the node information is specified. If HP Serviceguard is running, then the OUI installs CRS on each node on which the OUI detects that HP Serviceguard is running. If HP Serviceguard is not running, then the OUI is used to select the nodes on which to install CRS. The private node name is used by Oracle for Cache Fusion processing and is configured in the /etc/hosts file of each node in the cluster. The interface names associated with the network adapters for each network are the same on all nodes, e.g. lan0 for the private interconnect and lan1 for the public network.
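A hypothetical /etc/hosts fragment for the two nodes named earlier (prod_db1, prod_db2) could look like the following; all IP addresses and the -priv / -vip hostname suffixes are illustrative assumptions only:

# Public network (lan1)
10.1.1.11      prod_db1
10.1.1.12      prod_db2
# Private interconnect (lan0)
192.168.1.11   prod_db1-priv
192.168.1.12   prod_db2-priv
# Virtual IP addresses (normally also registered in DNS)
10.1.1.21      prod_db1-vip
10.1.1.22      prod_db2-vip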


On the Private Interconnect Enforcement page the OUI displays a list of cluster-wide interfaces. Here, each interface is specified as Public or Private using the drop-down menus.

When Next is clicked on the Private Interconnect Enforcement page, the OUI looks for the Oracle Cluster Registry file ocr.loc in the /var/opt/oracle directory. If the ocr.loc file already exists and has a valid entry for the Oracle Cluster Registry (OCR) location, then the Voting Disk Location page appears; otherwise, the Oracle Cluster Registry Location Information page appears and the OCR location is specified there.

On the Voting Disk Information Page, a complete path and file name for the file in which the voting disk is to be stored is specified and Next is clicked. This must be a shared raw device (/dev/rdsk/cxtxdx).

The components that the OUI will install are verified on the Summary page, and the installation is then started. During the installation, the OUI first copies the software to the local node and then copies it to the remote nodes.

The OUI then displays a dialog indicating that the root.sh script must be run on all the nodes. Run the root.sh script on one node at a time, and click OK in the dialog that root.sh displays after each run completes. Start root.sh on the next node only after the previous execution has completed.

When the OUI displays the End of Installation page, click Exit to exit the Installer.

INSTALLATION OF ORACLE DATABASE RAC 10g: -

This part describes phase two of the installation procedures for installing the Oracle Database 10g with Real Application Clusters (RAC).

Log in as the oracle user and set the ORACLE_HOME environment variable to the Oracle home directory. Then start the Oracle Universal Installer from Disk1 by issuing the command $ ./runInstaller.

When the OUI displays the Welcome page, click Next, and the OUI displays the Specify File Locations page. The Oracle home name and path that is used in this step must be different from the home that is used during the CRS installation in phase one.

On the Specify Hardware Cluster Installation Mode page, an installation mode is selected. The Cluster Installation mode is selected by default when the OUI detects that this installation is performed on a cluster. In addition, the local node is always selected for the installation. The additional nodes that are to be part of this installation session are selected; click Next.


On the Install Type page Enterprise Edition is selected.

On the Create a Starter Database page, a software-only installation is chosen. The Summary page displays the software components that the OUI will install and the space available in the Oracle home, along with a list of the nodes that are part of the installation session. Verify the installation details that appear on the Summary page and click Install, or click Back to revise the installation. During the installation, the OUI copies the software to the local node and then to the remote nodes.

The OUI then prompts to run the root.sh script on all the selected nodes. Run the root.sh script on one node at a time. The first root.sh run brings up the Virtual Internet Protocol Configuration Assistant (VIPCA). After the VIPCA completes, root.sh is run on the second node.

On the Public Network Interfaces page the public network interface cards (NICs) to which VIP addresses are to be assigned are selected.

On the IP Address page, an unused (unassigned) public virtual IP address is assigned for each node displayed on the OUI page; click Next. If the virtual hostname / virtual IP address is not yet known in the DNS, it has to be configured in the /etc/hosts file on both systems. Ensure that the same subnet mask that is configured for the public NIC is entered.

After Next is clicked, the VIPCA displays a Summary page. Review the information on this page and click Finish. A progress dialog appears while the VIPCA configures the virtual IP addresses with the network interfaces that were selected. The VIPCA then creates and starts the VIPs, GSD, and Oracle Notification Service (ONS) node applications.

When the configuration is complete, click OK to see the VIPCA session results. Review the information on the Configuration Results page, and click Exit to exit the VIPCA.

/oracle/10g/root.sh is run on the second node, and the result is checked with # crs_stat -t, which gives a compact output.
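As a verification sketch, the clusterware daemons and registered node applications can be checked as follows; the CRS home path below is an assumption based on the root.sh path above:

$ ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep     # clusterware daemons should be running on every node
$ /oracle/10g/bin/crs_stat -t                          # assumed CRS home; lists nodeapps (VIP, GSD, ONS) and their state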

CREATION OF ORACLE DATABASE USING DATABASE CONFIGURATION ASSISTANT: -

Connect as the oracle user and start the Database Configuration Assistant by issuing the command $ dbca.

The first page that the DBCA displays is the Welcome page for RAC. The DBCA displays this RAC-specific Welcome page only if the Oracle home from which it is invoked was cluster installed. If the DBCA does not display this Welcome page for RAC, then the DBCA was unable to detect whether the Oracle home is cluster installed. Select Real Application Clusters database and click Next.

At the Configure Database Options page select Create a database and click Next.

At the Node Selection page the DBCA highlights the local node by default. The other nodes that we want to configure as members of our cluster database are selected; click Next.

The templates on the Database Templates page are Custom Database, Transaction Processing, Data Warehouse, and General Purpose. General-purpose database is selected, click Next.

At the Database Identification page, the global database name and the Oracle system identifier (SID) prefix for our database are entered; click Next.

On the Management Options page, we can choose to manage our database with Enterprise Manager. On UNIX-based systems only, we can also choose either the Grid Control or Database Control option if we select Enterprise Manager database management.

Then at the Database Credentials page we can enter the passwords for our database.

At the Storage Options page we selected a storage type for the database. On the HP-UX platform there is no Cluster File System.

To initiate the creation of the required ASM instance, the password for the SYS user of the ASM instance is supplied. Either an IFILE or an SPFILE can be selected on shared storage for the instances. After the required information is entered, click Next to create the ASM instance.

Once the instance is created, DBCA proceeds to the ASM Disk Groups page, which allows us to create a new disk group, add disks to an existing disk group, or select a disk group for database storage. When a new ASM instance is created, there are no disk groups from which to select, so a new one is created by clicking Create New to open the Create Disk Group page.

At the Create Disk Group page, the disk group name is entered, External Redundancy is selected as the redundancy level for the group, and Next is clicked.

At the Database File Locations page, Oracle-Managed Files is selected. On the Recovery Configuration page, when ASM is used, we can also select the flash recovery area and its size. When a preconfigured database template is selected, such as the General Purpose template, the DBCA displays the control files, datafiles, and redo logs on the Database Storage page. Select the folder and the file name underneath the folder to edit the file name.

On the Creation Options page, Create Database is selected and Finish is clicked. Review the Summary dialog information and click OK to create the database.

ORACLE ENTERPRISE MANAGER 10g DATABASE CONTROL: -

When the database software is installed, the OUI also installs the software for Oracle Enterprise Manager Database Control and integrates this tool into the cluster environment. Once installed, Enterprise Manager Database Control is fully configured and operational for RAC. We can also install Enterprise Manager Grid Control onto other client machines outside our cluster to monitor multiple RAC and single-instance Oracle database environments.

Start the DBConsole agent on one of the cluster nodes as the oracle user:

$ emctl start dbconsole

To connect to the Oracle Enterprise Manager Database Control (default port 5500) open the following URL in the web browser: http://<node1a>:5500/em

Log in as sys/manager with the sysdba profile.

Accept the licensing agreement.

The OEM Database Control home page is now displayed.

With this, the installation of Oracle 10g RAC on HP-UX at CRIS was completed and the project finished successfully.