7/17/2019 RAC INstallation on SUN Cluster
http://slidepdf.com/reader/full/rac-installation-on-sun-cluster 1/27
Thanks to Source:
http://www.iselfschooling.com/mc4articles/RACinstallation.htm
How to install Oracle RAC on Sun Cluster?
Step-By-Step Installation of RAC on Sun Cluster v3

This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on Sun Cluster v3. For additional explanation or information on any of these steps, please see the references listed at the end of this document.
1. Configuring the Cluster Hardware
1.1 Minimal Hardware List & System Requirements

For a two-node cluster the following would be a minimum recommended hardware list.
1.1.1 Hardware

• For Sun servers, Sun or third-party storage products, cluster interconnects, public networks, switch options, memory, swap and CPU requirements, consult the operating system or hardware vendor. Sun's interconnect, RSM, is now supported with a patch; check SunSolve for the required release and patch number.
• System disk partitions
  • /globaldevices - a 100-Mbyte file system that will be used by the scinstall(1M) utility for global devices.
  • Volume manager - a 10-Mbyte partition for volume manager use on a slice at the end of the disk (slice 7). If your cluster uses VERITAS Volume Manager (VxVM) and you intend to encapsulate the root disk, you need two unused slices available for use by VxVM.
As with any other system running the Solaris Operating System (SPARC) environment, you can configure the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the directories in the root (/) file system.

The following describes the software contents of the root (/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when you plan your partitioning scheme.

root (/) - The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system. For best results, you need to configure ample additional space and inode capacity for the creation of both block special devices and character special devices used by VxVM software, especially if a large number of shared disks are in the cluster. Therefore, add at least 100 Mbytes to the amount of space you would normally allocate for your root (/) file system.

/var - The Sun Cluster software occupies a negligible amount of space in the /var file system at installation time. However, you need to set aside ample space for log files. Also, more messages might be logged on a clustered node than would be found on a typical standalone server. Therefore, allow at least 100 Mbytes for the /var file system.

/usr - Sun Cluster software occupies less than 25 Mbytes of space in the /usr file system. VxVM software requires less than 15 Mbytes.

/opt - Sun Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. VxVM software can use over 40 Mbytes if all of its packages and tools are installed. In addition, most database and applications software is installed in the /opt file system. If you use Sun Management Center software to monitor the cluster, you need an additional 25 Mbytes of space on each node to support the Sun Management Center agent and Sun Cluster module packages.

An example system disk layout is as follows:-
A sample system disk layout

Slice  Contents         Allocation (in Mbytes)  Description
0      /                1168    441 Mbytes for Solaris Operating System (SPARC) environment software.
                               100 Mbytes extra for root (/).
                               100 Mbytes extra for /var.
                               25 Mbytes for Sun Cluster software.
                               55 Mbytes for volume manager software.
                               1 Mbyte for Sun Cluster HA for NFS software.
                               25 Mbytes for the Sun Management Center agent and Sun Cluster module agent packages.
                               421 Mbytes (the remaining free space on the disk) for possible future use by database and application software.
1      swap             750     Minimum size when physical memory is less than 750 Mbytes.
2      overlap          2028    The entire disk.
3      /globaldevices   100     The Sun Cluster software later assigns this slice a different mount point and mounts it as a cluster file system.
4      unused           -       Available as a free slice for encapsulating the root disk under VxVM.
5      unused           -
6      unused           -
7      volume manager   10      Used by VxVM for installation after you free the slice.
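As a quick arithmetic check, the data slices in the sample layout (0, 1, 3 and 7) should sum to the overlap slice 2, which spans the entire disk. A minimal sketch feeding the table's numbers through awk:

```shell
# Data slices from the sample layout above (slice, contents, Mbytes).
cat > /tmp/layout <<'EOF'
0 root 1168
1 swap 750
3 globaldevices 100
7 volmgr 10
EOF

# Total should equal the overlap (whole-disk) allocation.
awk '{ total += $3 } END { print total " Mbytes" }' /tmp/layout
```

1168 + 750 + 100 + 10 = 2028 Mbytes, matching the overlap allocation in the table.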
1.1.2 Software

• For Solaris Operating System (SPARC), Sun Cluster, Volume Manager and File System support, consult the operating system vendor. Sun Cluster provides scalable services with Global File Systems (GFS) based around the Proxy File System (PxFS). PxFS makes file access location-transparent and is Sun's implementation of a cluster file system. Currently, Sun is not supporting anything relating to GFS for RAC. Check with Sun for updates on this status.
1.1.3 Patches

The Sun Cluster nodes might require patches in the following areas:-
• Solaris Operating System (SPARC) Environment patches
• Storage Array interface firmware patches
• Storage Array disk drive firmware patches
• Veritas Volume Manager patches

Some patches, such as those for Veritas Volume Manager, cannot be installed until after the volume management software installation is completed. Before installing any patches, always do the following:-
• make sure all cluster nodes have the same patch levels
• do not install any firmware-related patches without qualified assistance
• always obtain the most current patch information
• read all patch README notes carefully.

Specific Solaris Operating System (SPARC) patches may be required, and it is recommended that the latest Solaris Operating System (SPARC) release, Sun's recommended patch clusters and Sun Cluster updates are applied. Current Sun Cluster updates include release 11/00, update one 7/01, update two 12/01 and update three 5/02. To determine which patches have been installed, enter the following command:
$ showrev -p
For the latest Sun Cluster 3.0 required patches, see the Sun Cluster 3.0 EarlyNotifier document on SunSolve.
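When a specific patch is required, its presence can be checked against the showrev -p output by base patch ID, since the installed revision suffix can vary. A minimal sketch grepping a captured copy of the output; the patch IDs here are made up for illustration, not real requirements:

```shell
# Hypothetical 'showrev -p' output saved to a file (IDs are illustrative).
cat > /tmp/showrev.out <<'EOF'
Patch: 108528-13 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu
Patch: 110648-19 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWscr
EOF

# Match on the base ID only, so any installed revision counts.
if grep -q '^Patch: 110648-' /tmp/showrev.out; then
  echo "patch 110648 installed"
else
  echo "patch 110648 missing"
fi
```

On a live node the file would be produced with showrev -p > /tmp/showrev.out first.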
1.2 Installing Sun StorEdge Disk Arrays

Follow the procedures for an initial installation of StorEdge disk enclosures or arrays, prior to installing the Solaris Operating System (SPARC) operating environment and Sun Cluster software. Perform this procedure in conjunction with the procedures in the Sun Cluster 3.0 Software Installation Guide and your server hardware manual. Multihost storage in clusters uses the multi-initiator capability of the Small Computer System Interface (SCSI) specification. For conceptual information on multi-initiator capability, see the Sun Cluster 3.0 Concepts document.
1.3 Installing Cluster Interconnect and Public Network Hardware

The following procedures are needed for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed. Separate procedures need to be followed for installing Ethernet-based interconnect hardware, PCI-SCI-based interconnect hardware, and public network hardware (see Sun's current installation notes).

• If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware. Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:
• A cluster with only two nodes can use a point-to-point connection, requiring no cluster transport junctions. Use a point-to-point (crossover) Ethernet cable if you are connecting 100BaseT or TPE ports of a node directly to ports on another node. Gigabit Ethernet uses the standard fibre optic cable for both point-to-point and switch configurations.

Note: If you use a transport junction in a two-node cluster, you can add additional nodes to the cluster without bringing the cluster offline to reconfigure the transport path.

• A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied).

You install the cluster software and configure the interconnect after you have installed all other hardware.
2. Creating a Cluster

2.1 Sun Cluster Software Installation

The Sun Cluster v3 host system (node) installation process is completed in several major steps. The general process is:-
• repartition boot disks to meet Sun Cluster v3 requirements
• install the Solaris Operating System (SPARC) Environment software
• configure the cluster host systems environment
• install Solaris Operating System (SPARC) Environment patches
• install hardware-related patches
• install Sun Cluster v3 on the first cluster node
• install Sun Cluster v3 on the remaining nodes
• install any Sun Cluster patches and updates
• perform postinstallation checks and configuration

You can use two methods to install the Sun Cluster v3 software on the cluster nodes:-
• interactive installation using the scinstall installation interface
• automatic JumpStart installation (requires a pre-existing Solaris Operating System (SPARC) JumpStart server)
This note assumes an interactive installation of Sun Cluster v3 with update 2. The Sun Cluster installation program, scinstall, is located on the Sun Cluster v3 CD in the /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools directory. When you start the program without any options, it prompts you for cluster configuration information that is stored for use later in the process. Although the Sun Cluster software can be installed on all nodes in parallel, you can complete the installation on the first node and then run scinstall on all other nodes in parallel. The additional nodes get some basic information from the first, or sponsoring, node that was configured.
2.2 Form a One-Node Cluster

As root:-

    # cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
    # ./scinstall

    *** Main Menu ***

    Please select from one of the following (*) options:

        * 1) Establish a new cluster using this machine as the first node
        * 2) Add this machine as a node in an established cluster
          3) Configure a cluster to be JumpStarted from this install server
          4) Add support for new data services to this cluster node
          5) Print release information for this cluster node

        * ?) Help with menu options
        * q) Quit

    Option: 1

    *** Establishing a New Cluster ***
    ...
    Do you want to continue (yes/no) [yes]? yes

When prompted whether to continue to install Sun Cluster software packages, type yes.

    >>> Software Package Installation <<<

    Installation of the Sun Cluster framework software packages
    will take a few minutes to complete.

    Is it okay to continue (yes/no) [yes]? yes

    ** Installing SunCluster 3.0 **
        SUNWscr.....done
    ...
    Hit ENTER to continue:

After all packages are installed, press Return to continue to the next screen.

Specify the cluster name.

    >>> Cluster Name <<<
    ...
    What is the name of the cluster you want to establish? clustername

Run the preinstallation check.

    >>> Check <<<

    This step runs sccheck(1M) to verify that certain basic hardware and
    software pre-configuration requirements have been met. If sccheck(1M)
    detects potential problems with configuring this machine as a cluster
    node, a list of warnings is printed.

    Hit ENTER to continue:
Specify the names of the other nodes that will become part of this cluster.

    >>> Cluster Nodes <<<
    ...
    Node name: node2
    Node name (Ctrl-D to finish): <Control-D>

    This is the complete list of nodes:
    ...
    Is it correct (yes/no) [yes]?
Specify whether to use Data Encryption Standard (DES) authentication.

By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified. However, the node actually communicates with the sponsoring node over the public network, since the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.

If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.

    >>> Authenticating Requests to Add Nodes <<<
    ...
    Do you need to use DES authentication (yes/no) [no]?
Specify the private network address and netmask.

    >>> Network Address for the Cluster Transport <<<
    ...
    Is it okay to accept the default network address (yes/no) [yes]?
    Is it okay to accept the default netmask (yes/no) [yes]?

Note: You cannot change the private network address after the cluster is successfully formed.
Specify whether the cluster uses transport junctions.

If this is a two-node cluster, specify whether you intend to use transport junctions.

    >>> Point-to-Point Cables <<<
    ...
    Does this two-node cluster use transport junctions (yes/no) [yes]?

Tip - You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.

If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.

    >>> Point-to-Point Cables <<<
    ...
    Since this is not a two-node cluster, you will be asked to
    configure two transport junctions.

    Hit ENTER to continue:
Does this cluster use transport junctions?

If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.

    >>> Cluster Transport Junctions <<<
    ...
    What is the name of the first junction in the cluster? [switch1]
    What is the name of the second junction in the cluster? [switch2]

Specify the first cluster interconnect transport adapter.

Type help to list all transport adapters available to the node.

    >>> Cluster Transport Adapters and Cables <<<
    ...
    What is the name of the first cluster transport adapter (help)? [adapter]

    Name of the junction to which "adapter" is connected? [switch1]
    Use the default port name for the "adapter" connection (yes/no) [yes]?

    Hit ENTER to continue:
Note: If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.

    Use the default port name for the "adapter" connection (yes/no) [yes]? no
    What is the name of the port you want to use? 0

Choose the second cluster interconnect transport adapter.
Type help to list all transport adapters available to the node.

    What is the name of the second cluster transport adapter (help)? [adapter]

You can configure up to two adapters by using the scinstall command. You can configure additional adapters after Sun Cluster software is installed by using the scsetup utility.

If your cluster uses transport junctions, specify the name of the second transport junction and its port.
    Name of the junction to which "adapter" is connected? [switch2]
    Use the default port name for the "adapter" connection (yes/no) [yes]?

    Hit ENTER to continue:

Note: If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter port name. Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.

    Use the default port name for the "adapter" connection (yes/no) [yes]? no
    What is the name of the port you want to use? 0

Specify the global devices file system name.

    >>> Global Devices File System <<<
    ...
    The default is to use /globaldevices.

    Is it okay to use this default (yes/no) [yes]?

Do you have any Sun Cluster software patches to install?

    >>> Automatic Reboot <<<
    ...
    Do you want scinstall to reboot for you (yes/no) [yes]?
Accept or decline the generated scinstall command. The scinstall command generated from your input is displayed for confirmation.

    >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

    scinstall -ik ...
    ...
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?

If you accept the command and continue the installation, scinstall processing continues. Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.

After scinstall returns you to the Main Menu, you can rerun menu option 1 and provide different answers. Your previous session answers display as the defaults.

Install any Sun Cluster software patches. See the Sun Cluster 3.0 Release Notes for the location of patches and installation instructions. Reboot the node to establish the cluster. If you rebooted the node after you installed patches, you do not need to reboot the node a second time.
The first node reboot after Sun Cluster software installation forms the cluster and establishes this node as the first-installed node of the cluster. During the final installation process, the scinstall utility performs the following operations on the first cluster node:-
• installs cluster software packages
• disables routing on the node (touch /etc/notrouter)
• creates an installation log (/var/cluster/logs/install)
• reboots the node
• creates the Disk ID devices during the reboot

You can then install additional nodes in the cluster.
2.3 Installing Additional Nodes

After you complete the Sun Cluster software installation on the first node, you can run scinstall in parallel on all remaining cluster nodes. The additional nodes are placed in install mode so they do not have a quorum vote. Only the first node has a quorum vote.

As the installation on each new node completes, each node reboots and comes up in install mode without a quorum vote. If you reboot the first node at this point, all the other nodes would panic because they cannot obtain a quorum. You can, however, reboot the second or later nodes freely. They should come up and join the cluster without errors.

Cluster nodes remain in install mode until you use the scsetup command to reset the install mode. You must perform postinstallation configuration to take the nodes out of install mode and also to establish quorum disk(s).

• Ensure that the first-installed node is successfully installed with Sun Cluster software and that the cluster is established.
• If you are adding a new node to an existing, fully installed cluster, ensure that you have performed the following tasks.
• Prepare the cluster to accept a new node.
• Install Solaris Operating System (SPARC) software on the new node.
• Become superuser on the cluster node to install.
• Start the scinstall utility.

    # ./scinstall

    *** Main Menu ***

    Please select from one of the following (*) options:

        * 1) Establish a new cluster using this machine as the first node
        * 2) Add this machine as a node in an established cluster
          3) Configure a cluster to be JumpStarted from this install server
          4) Add support for new data services to this cluster node
          5) Print release information for this cluster node

        * ?) Help with menu options
        * q) Quit

    Option: 2
    *** Adding a Node to an Established Cluster ***
    ...
    Do you want to continue (yes/no) [yes]? yes

• When prompted whether to continue to install Sun Cluster software packages, type yes.

    >>> Software Installation <<<

    Installation of the Sun Cluster framework software packages
    will only take a few minutes to complete.

    Is it okay to continue (yes/no) [yes]? yes

    ** Installing SunCluster 3.0 **
        SUNWscr.....done
    ...
    Hit ENTER to continue:

• Specify the name of any existing cluster node, referred to as the sponsoring node.

    >>> Sponsoring Node <<<
    ...
    What is the name of the sponsoring node? node1

    >>> Cluster Name <<<
    ...
    What is the name of the cluster you want to join? clustername

    >>> Check <<<

    This step runs sccheck(1M) to verify that certain basic hardware and
    software pre-configuration requirements have been met. If sccheck(1M)
    detects potential problems with configuring this machine as a cluster
    node, a list of warnings is printed.

    Hit ENTER to continue:
• Specify whether to use autodiscovery to configure the cluster transport.

    >>> Autodiscovery of Cluster Transport <<<

    If you are using ethernet adapters as your cluster transport
    adapters, autodiscovery is the best method for configuring the
    cluster transport.

    Do you want to use autodiscovery (yes/no) [yes]?
    ...
    The following connections were discovered:

    node1:adapter  switch  node2:adapter
    node1:adapter  switch  node2:adapter
    Is it okay to add these connections to the configuration (yes/no) [yes]?

• Specify whether this is a two-node cluster.

    >>> Point-to-Point Cables <<<
    ...
    Is this a two-node cluster (yes/no) [yes]?

    Does this two-node cluster use transport junctions (yes/no) [yes]?

• Did you specify that the cluster will use transport junctions? If yes, specify the transport junctions.

    >>> Cluster Transport Junctions <<<
    ...
    What is the name of the first junction in the cluster? [switch1]
    What is the name of the second junction in the cluster? [switch2]

• Specify the first cluster interconnect transport adapter.

    >>> Cluster Transport Adapters and Cables <<<
    ...
    What is the name of the first cluster transport adapter (help)? adapter
• Specify what the first transport adapter connects to. If the transport adapter uses a transport junction, specify the name of the junction and its port.

    Name of the junction to which "adapter" is connected? [switch1]
    ...
    Use the default port name for the "adapter" connection (yes/no) [yes]?

OR

    Name of adapter on "node1" to which "adapter" is connected? adapter

• Specify the second cluster interconnect transport adapter.

    What is the name of the second cluster transport adapter (help)? adapter

• Specify what the second transport adapter connects to. If the transport adapter uses a transport junction, specify the name of the junction and its port.

    Name of the junction to which "adapter" is connected? [switch2]
    Use the default port name for the "adapter" connection (yes/no) [yes]?

    Hit ENTER to continue:

OR

    Name of adapter on "node1" to which "adapter" is connected? adapter

• Specify the global devices file system name.

    >>> Global Devices File System <<<
    ...
    The default is to use /globaldevices.

    Is it okay to use this default (yes/no) [yes]?
• Do you have any Sun Cluster software patches to install? If not:-

    >>> Automatic Reboot <<<
    ...
    Do you want scinstall to reboot for you (yes/no) [yes]?

    >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

    scinstall -i ...
    ...
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?

• Install any Sun Cluster software patches.
• Reboot the node to establish the cluster unless you rebooted the node after you installed patches.
Do not reboot or shut down the first-installed node while any other nodes are being installed, even if you use another node in the cluster as the sponsoring node. Until quorum votes are assigned to the cluster nodes and cluster install mode is disabled, the first-installed node, which established the cluster, is the only node that has a quorum vote. If the cluster is still in install mode, you will cause a system panic because of lost quorum if you reboot or shut down the first-installed node. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure Post-Installation Configuration.
2.4 Post-Installation Configuration

Post-installation can include a number of tasks such as installing a volume manager and/or database software. There are other tasks that must be completed first:-
• taking the cluster nodes out of install mode
• defining quorum disks

Before a new cluster can operate normally, the install mode attribute must be reset on all nodes. You can do this in a single step using the scsetup utility. This utility is a menu-driven interface that prompts for quorum device information the first time it is run on a new cluster installation. Once the quorum device is defined, the install mode attribute is reset on all nodes. Use the scconf command as follows to disable or enable install mode:-
• scconf -c -q reset (reset install mode)
• scconf -c -q installmode (enable install mode)
    # /usr/cluster/bin/scsetup

    >>> Initial Cluster Setup <<<

    This program has detected that the cluster "installmode"
    attribute is set ...

    Please do not proceed if any additional nodes have yet to
    join the cluster.

    Is it okay to continue (yes/no) [yes]? yes

    Which global device do you want to use? dx

    Is it okay to proceed with the update (yes/no) [yes]? yes

    scconf -a -q globaldev=dx

    Do you want to add another quorum disk (yes/no)? no

    Is it okay to reset "installmode" (yes/no) [yes]? yes

    scconf -c -q reset

    Cluster initialization is complete.

Although it appears that the scsetup utility uses two simple scconf commands to define the quorum device and reset install mode, the process is more complex. The scsetup utility performs numerous verification checks for you. It is recommended that you do not use scconf manually to perform these functions.
2.5 Post-Installation Verification

When you have completed the Sun Cluster software installation on all nodes, verify the following information:
• DID device configuration
• General CCR configuration information

Each attached system sees the same DID devices but might use a different logical path to access them. You can verify the DID device configuration with the scdidadm command. The following scdidadm output demonstrates how a DID device can have a different logical path from each connected node.

    # scdidadm -L

The list on each node should be the same. Output resembles the following:

    1 phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2 phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2 phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3 phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3 phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    ...
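Shared disks can also be picked out of this listing mechanically: a DID device that appears under more than one node path is visible from multiple hosts. A small sketch over a captured copy of the sample output (the host and device names mirror the example, not a real cluster):

```shell
# Sample scdidadm -L output saved to a file.
cat > /tmp/did.out <<'EOF'
1 phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
2 phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3 phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3 phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
EOF

# Print each DID device seen from more than one node (i.e. shared disks).
awk '{ paths[$NF]++ } END { for (d in paths) if (paths[d] > 1) print d }' \
    /tmp/did.out | sort
```

With the sample data this prints d2 and d3, the two multihosted devices.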
The scstat utility displays the current status of various cluster components. You can use it to display the following information:
• the cluster name and node names
• names and status of cluster members
• status of resource groups and related resources
• cluster interconnect status

The following scstat command option displays the cluster membership and quorum vote information.

    # /usr/cluster/bin/scstat -q
Cluster configuration information is stored in the CCR on each node. You should verify that the basic CCR values are correct. The scconf -p command displays general cluster information along with detailed information about each node in the cluster.

    # /usr/cluster/bin/scconf -p
2.6 Basic Cluster Administration

Checking Status Using the scstat Command

Without any options, the scstat command displays general information for all cluster nodes. You can use options to restrict the status information to a particular type of information and/or to a particular node.

The following command displays the cluster transport status for a single node with gigabit ethernet:-

    $ /usr/cluster/bin/scstat -W -h <node1>

    -- Cluster Transport Paths --

                      Endpoint       Endpoint       Status
                      --------       --------       ------
    Transport path:   <node2>:ge1    <node1>:ge1    Path online
    Transport path:   <node2>:ge0    <node1>:ge0    Path online
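For scripted monitoring, the online path count can be pulled out of a captured copy of this output; a minimal sketch assuming the two-adapter layout shown (ge0/ge1 are example adapter names):

```shell
# Sample transport-path lines as printed in the status output above.
cat > /tmp/scstat.out <<'EOF'
Transport path:   <node2>:ge1    <node1>:ge1    Path online
Transport path:   <node2>:ge0    <node1>:ge0    Path online
EOF

# Count interconnect paths currently reported online.
grep -c 'Path online$' /tmp/scstat.out
```

A result lower than the number of configured adapters would indicate a faulted interconnect path worth investigating.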
Checking Status Using the sccheck Command

The sccheck command verifies that all of the basic global device structure is correct on all nodes. Run the sccheck command after installing and configuring a cluster, as well as after performing any administration procedures that might result in changes to the devices, volume manager, or Sun Cluster configuration. You can run the command without any options or direct it to a single node, and you can run it from any active cluster member. There is no output from the command unless errors are encountered. Typical sccheck command variations follow (as root):-

    # /usr/cluster/bin/sccheck
    # /usr/cluster/bin/sccheck -h <node1>
Checking Status Using the scinstall Command

During the Sun Cluster software installation, the scinstall utility is copied to the /usr/cluster/bin directory. You can run the scinstall utility with options that display the Sun Cluster revision and/or the names and revision of installed packages. The displayed information is for the local node only. A typical scinstall status output follows:-

    $ /usr/cluster/bin/scinstall -pv
    SunCluster 3.0
    SUNWscr:    3.0.0,REV=2000.10.01.01.00
    SUNWscdev:  3.0.0,REV=2000.10.01.01.00
    SUNWscu:    3.0.0,REV=2000.10.01.01.00
    SUNWscman:  3.0.0,REV=2000.10.01.01.00
    SUNWscsal:  3.0.0,REV=2000.10.01.01.00
    SUNWscsam:  3.0.0,REV=2000.10.01.01.00
    SUNWscvm:   3.0.0,REV=2000.10.01.01.00
    SUNWmdm:    4.2.1,REV=2000.08.08.10.01
Starting / Stopping Cluster Nodes

The Sun Cluster software starts automatically during a system boot operation. Use the init command to shut down a single node. You use the scshutdown command to shut down all nodes in the cluster.

Before shutting down a node, you should switch resource groups to the next preferred node and then run init 0 on the node.

You can shut down the entire cluster with the scshutdown command from any active cluster node. A typical cluster shutdown example follows:-

    # /usr/cluster/bin/scshutdown -y -g 30

    Broadcast Message from root on <node1> ...
    The cluster <cluster> will be shutdown in 30 seconds
    ...
    The system is down.
    syncing file systems... done
    Program terminated
    ok
Log Files for Sun Cluster

The log files for Sun Cluster are stored in /var/cluster/logs, and in /var/cluster/logs/install for installation. Both Solaris Operating System (SPARC) and Sun Cluster software write error messages to the /var/adm/messages file, which over time can fill the /var file system. If a cluster node's /var file system fills up, Sun Cluster might not be able to restart on that node. Additionally, you might not be able to log in to the node.
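Because a full /var can prevent Sun Cluster from restarting, a simple guard is to watch its usage before it becomes critical. A sketch using portable df -P output; the 90% threshold is an arbitrary example value, not a Sun recommendation:

```shell
# Extract the percent-used figure for the file system holding /var
# (column 5 of 'df -P', with the trailing '%' stripped).
usage=$(df -P /var | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -ge 90 ]; then
  echo "WARNING: /var is ${usage}% full"
else
  echo "/var usage ${usage}% is within limits"
fi
```

Run periodically from cron, this gives warning before /var/adm/messages growth locks the node out.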
2.7 Installing a Volume Manager

It is now necessary to install volume management software. Sun Cluster v3 supports two products:-
• Sun's Solaris Volume Manager software (Solstice DiskSuite software), or
• VERITAS Volume Manager (VxVM) software - v3.0.4+ (32-bit RAC) or v3.1.1+ (64-bit RAC) is needed to provide shared disk access and distributed mirroring.

Although Sun's Solstice DiskSuite (SDS), integrated into the Solaris Operating System (SPARC) from Solaris 9 onwards as Solaris Volume Manager (SVM), is supported by Sun Cluster v3, neither SDS nor SVM supports cluster-wide volume management; they operate on a per-node basis only. Hence these products cannot be used for RAC.
3.0 Preparing for the Installation of RAC

The Real Application Clusters installation process includes four major tasks:
1. Install the operating system-dependent (OSD) clusterware.
2. Configure the shared disks and complete the UNIX preinstallation tasks.
3. Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
4. Create and configure your database.

3.1 Install the Operating System-Dependent (OSD) Clusterware
You must configure RAC to use the shared disk architecture of Sun Cluster. In this
configuration, a single database is shared among multiple instances of RAC that
access the database concurrently. Conflicting access to the same data is controlled by
means of the Oracle UNIX Distributed Lock Manager (UDLM). If a process or a node
crashes, the UDLM is reconfigured to recover from the failure. In the event of a node
failure in an RAC environment, you can configure Oracle clients to reconnect to the
surviving server without the use of the IP failover used by Sun Cluster failover data
services.
The Sun Cluster install CDs contain the required SUNWudlm package:-

Package SUNWudlm: Sun Cluster Support for Oracle Parallel Server UDLM, (opt) on
Sun Cluster v3

To install, use the pkgadd command:-

# pkgadd -d . SUNWudlm

Once installed, Oracle's interface with this, the Oracle UDLM, can be installed.
To install Sun Cluster Support for RAC with VxVM, the following Sun Cluster 3
Agents data services packages need to be installed as superuser (see Sun's Sun Cluster
3 Data Services Installation and Configuration Guide):-

# pkgadd -d . SUNWscucm SUNWudlmr SUNWcvmr SUNWcvm

(SUNWudlm will also need to be included unless already installed from
the step above).
Before rebooting the nodes, you must ensure that you have correctly installed and
configured the Oracle UDLM software.
The Oracle UNIX Distributed Lock Manager (ORCLudlm, also known as the Oracle
Node Monitor) must be installed. This may be referred to in the Oracle documentation
as the "Parallel Server Patch". To check version information on any previously
installed dlm package:

$ pkginfo -l ORCLudlm | grep PSTAMP
$ pkginfo -l ORCLudlm | grep VERSION

You must apply the following steps to all cluster nodes. The Oracle udlm can be found
on Disk1 of the Oracle9i server installation CD-ROM, in the
directory opspatch, or racpatch in later versions. A version of the Oracle udlm
may also be found on the Sun Cluster CD set, but check the Oracle release for the
latest applicable version. The informational
files README.udlm and release_notes.332x are located in this directory with
version and install information. This is the Oracle udlm package required by this
release of Oracle on Solaris Operating System (SPARC), and any previous versions
must be removed prior to installation.
• Shutdown all existing clients of the Oracle UNIX Distributed Lock Manager
(including all Oracle Parallel Server/RAC instances).
• Become super user.
• Reboot the cluster node in non-cluster mode (replace <node name> with your
cluster node name):-

# scswitch -S -h <node name>
# shutdown -g 0 -y
... wait for the ok prompt
ok boot -x
• Unpack the file ORCLudlm.tar.Z into a directory:

cd <CD-ROM mount>/opspatch (or racpatch in later versions)
cp ORCLudlm.tar.Z /tmp
cd /tmp
uncompress ORCLudlm.tar.Z
tar xvf ORCLudlm.tar

• Install the patch by adding the package as root:

cd /tmp
pkgadd -d . ORCLudlm

The udlm configuration files in SC2.x and SC3.0 are the following:
SC2.x: /etc/opt/SUNWcluster/conf/<default_cluster_name>.ora_cdb
SC3.0: /etc/opt/SUNWcluster/conf/udlm.conf
The udlm log files in SC2.x and SC3.0 are the following:
SC2.x: /var/opt/SUNWcluster/dlm_<node_name>/logs/dlm.log
SC3.0: /var/cluster/ucmm/dlm_<node_name>/logs/dlm.log
pkgadd will copy a template file, <configuration_file_name>.template,
to /etc/opt/SUNWcluster/conf.
• Now that udlm (also referred to as the "Cluster Membership Monitor") is
installed, you can start it up by rebooting the cluster node in cluster mode:-

# shutdown -g 0 -y -i 6

3.2 Configure the shared disks and UNIX preinstallation tasks
3.2.1 Configure the shared disks
Real Application Clusters requires that each instance be able to access a set of
unformatted devices on a shared disk subsystem. These shared disks are also referred
to as raw devices. If your platform supports an Oracle-certified cluster file system,
however, you can store the files that Real Application Clusters requires directly on the
cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to
update the control file, server parameter file, each datafile, and each redo log file. All
instances in the cluster share these files.
The Oracle instances in the RAC configuration write information to raw devices
defined for:
• The control file
• The spfile.ora
• Each datafile
• Each ONLINE redo log file
• Server Manager (SRVM) configuration information
It is therefore necessary to define raw devices for each of these categories of file. This
normally means striping data across a large number of disks in a RAID 0+1
configuration.
The Oracle Database Configuration Assistant (DBCA) will create a seed database
expecting the following configuration:-

Raw Volume                      File Size    Sample File Name
SYSTEM tablespace               400 MB       db_name_raw_system_400m
USERS tablespace                120 MB       db_name_raw_users_120m
TEMP tablespace                 100 MB       db_name_raw_temp_100m
UNDOTBS tablespace per
instance                        312 MB       db_name_raw_undotbsx_312m
CWMLITE tablespace              100 MB       db_name_raw_cwmlite_100m
EXAMPLE                         160 MB       db_name_raw_example_160m
OEMREPO                         20 MB        db_name_raw_oemrepo_20m
INDX tablespace                 70 MB        db_name_raw_indx_70m
TOOLS tablespace                12 MB        db_name_raw_tools_12m
DRSYS tablespace                90 MB        db_name_raw_drsys_90m
First control file              110 MB       db_name_raw_controlfile1_110m
Second control file             110 MB       db_name_raw_controlfile2_110m
Two ONLINE redo log files
per instance                    120 MB x 2   db_name_thread_lognumber_120m
spfile.ora                      5 MB         db_name_raw_spfile_5m
srvmconfig                      100 MB       db_name_raw_srvmcon_100m

Note: Automatic Undo Management requires an undo tablespace per instance;
therefore you would require a minimum of 2 such tablespaces as described above. By
following the naming convention described in the table above, raw partitions are
identified with the database and the raw volume type (the data contained in the raw
volume). Raw volume size is also identified using this method.
Note: In the sample names listed in the table, the string db_name should be replaced
with the actual database name, thread is the thread number of the instance,
and lognumber is the log number within a thread.
On the node from which you run the Oracle Universal Installer, create an ASCII file
identifying the raw volume objects as shown above. The DBCA requires that these
objects exist during installation and database creation. When creating the ASCII file
content for the objects, name them using the format:

database_object=raw_device_file_path

When you create the ASCII file, separate the database objects from the paths with
equals (=) signs as shown in the example below:-

system=/dev/vx/rdsk/oracle_dg/db_name_raw_system_400m
spfile=/dev/vx/rdsk/oracle_dg/db_name_raw_spfile_5m
users=/dev/vx/rdsk/oracle_dg/db_name_raw_users_120m
temp=/dev/vx/rdsk/oracle_dg/db_name_raw_temp_100m
undotbs1=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs1_312m
undotbs2=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs2_312m
example=/dev/vx/rdsk/oracle_dg/db_name_raw_example_160m
cwmlite=/dev/vx/rdsk/oracle_dg/db_name_raw_cwmlite_100m
indx=/dev/vx/rdsk/oracle_dg/db_name_raw_indx_70m
tools=/dev/vx/rdsk/oracle_dg/db_name_raw_tools_12m
drsys=/dev/vx/rdsk/oracle_dg/db_name_raw_drsys_90m
control1=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile1_110m
control2=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile2_110m
redo1_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log11_120m
redo1_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log12_120m
redo2_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log21_120m
redo2_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log22_120m
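A mapping file like the one above is worth sanity-checking before it is handed to the DBCA, which otherwise fails late in database creation. A minimal sketch, assuming VxVM raw device paths under /dev/vx/rdsk; the check_raw_config helper and the temporary file names are invented for the example:

```shell
#!/bin/sh
# Sketch: verify every line of a DBCA raw-device mapping file has the
# object=raw_device_path form and points under /dev/vx/rdsk.

check_raw_config() {
  awk -F= '
    NF != 2 { printf "bad line %d: %s\n", NR, $0; bad=1; next }
    $2 !~ /^\/dev\/vx\/rdsk\// { printf "not a raw device, line %d: %s\n", NR, $2; bad=1 }
    END { exit bad }
  ' "$1"
}

# Self-contained demonstration with a two-entry sample file:
cat > /tmp/raw.conf <<'EOF'
system=/dev/vx/rdsk/oracle_dg/db_name_raw_system_400m
spfile=/dev/vx/rdsk/oracle_dg/db_name_raw_spfile_5m
EOF

check_raw_config /tmp/raw.conf && echo "raw config OK"
```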
You must specify that Oracle should use this file to determine the raw device volume
names by setting the following environment variable, where filename is the name of
the ASCII file that contains the entries shown in the example above:

setenv DBCA_RAW_CONFIG filename
or
export DBCA_RAW_CONFIG=filename

3.2.2 UNIX Preinstallation Steps
After configuring the raw volumes, perform the following steps prior to installation as
root user:

Add the Oracle USER
• Make sure you have an osdba group defined in the /etc/group file on all
nodes of your cluster. To designate an osdba group name and group number
and osoper group during installation, these group names must be identical
on all nodes of your UNIX cluster that will be part of the Real Application
Clusters database. The default UNIX group name for the osdba and osoper
groups is dba. A typical entry would therefore look like the following:

dba::101:oracle
oinstall::102:root,oracle

• Create an oracle account on each node so that the account:
• Is a member of the osdba group
• Is used only to install and update Oracle software
• Has write permissions on remote directories
A typical command would look like the following:

# useradd -c "Oracle software owner" -g dba -G oinstall -u 101 -m -d /export/home/oracle -s /bin/ksh oracle

• Create a mount point directory on each node to serve as the top of your
Oracle software directory structure so that:
• The name of the mount point on each node is identical to that on the
initial node
• The oracle account has read, write, and execute privileges
• On the node from which you will run the Oracle Universal Installer, set up
user equivalence by adding entries for all nodes in the cluster, including the
local node, to the .rhosts file of the oracle account, or
the /etc/hosts.equiv file.
• As the oracle account user, check for user equivalence for the oracle account by
performing a remote login (rlogin) to each node in the cluster.
• As the oracle account user, if you are prompted for a password, you have not
given the oracle account the same attributes on all nodes. You must correct
this because the Oracle Universal Installer cannot use the rcp command to
copy Oracle products to the remote node's directories without user
equivalence.
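The equivalence check described above can be looped over all nodes. A dry-run sketch: the node names are examples and the check_equivalence helper is invented; on a real cluster, remove the echo so the rsh call actually runs, and any password prompt then reveals a node with missing equivalence.

```shell
#!/bin/sh
# Dry-run sketch: emit one trivial remote command per cluster node.
# With .rhosts/hosts.equiv correctly set up, each rsh runs silently
# without a password prompt.

check_equivalence() {
  for node in "$@"; do
    echo "rsh $node true"
  done
}

check_equivalence racnode1 racnode2
```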
System Kernel Parameters
Verify operating system kernel parameters are set to appropriate levels: The
file /etc/system is read by the operating system kernel at boot time. Check this
file for appropriate values for the following parameters.
Kernel Parameter  Setting      Purpose
SHMMAX            4294967295   Maximum allowable size of one shared
                               memory segment (4 GB).
SHMMIN            1            Minimum allowable size of a single shared
                               memory segment.
SHMMNI            100          Maximum number of shared memory segments
                               in the entire system.
SHMSEG            10           Maximum number of shared memory segments
                               one process can attach.
SEMMNI            1024         Maximum number of semaphore sets in the
                               entire system.
SEMMSL            100          Minimum recommended value. SEMMSL
                               should be 10 plus the largest PROCESSES
                               parameter of any Oracle database on the
                               system.
SEMMNS            1024         Maximum semaphores on the system. This
                               setting is a minimum recommended value.
                               SEMMNS should be set to the sum of the
                               PROCESSES parameter for each Oracle
                               database, add the largest one twice, plus add an
                               additional 10 for each database.
SEMOPM            100          Maximum number of operations per semop call.
SEMVMX            32767        Maximum value of a semaphore.
(swap space)      750 MB       Two to four times your system's physical
                               memory size.
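On Solaris these parameters are set in /etc/system with `set` directives; the fragment below mirrors the values in the table above using the standard shmsys/semsys tunable names (raise them where your PROCESSES settings demand it, and note a reboot is required for new values to take effect):

```text
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
```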
Establish system environment variables
• Set a local bin directory in the user's PATH, such as /usr/local/bin,
or /opt/bin. It is necessary to have execute permissions on this directory.
• Set the DISPLAY variable to point to the system's (from where you will run
OUI) IP address, or name, X server, and screen.
• Set a temporary directory path for TMPDIR with at least 20 MB of free space
to which the OUI has write permission.

Establish Oracle environment variables: Set the following Oracle environment
variables:

Environment Variable   Suggested value
ORACLE_BASE            eg /u01/app/oracle
ORACLE_HOME            eg /u01/app/oracle/product/9201
ORACLE_TERM            xterm
NLS_LANG               AMERICAN_AMERICA.UTF8 for example
ORA_NLS33              $ORACLE_HOME/ocommon/nls/admin/data
PATH                   Should contain $ORACLE_HOME/bin
CLASSPATH              $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:
                       $ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib

• Create the directory /var/opt/oracle and set ownership to the oracle
user.
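These variables are typically set once in the oracle user's shell profile. A sketch using the suggested values from the table above (the paths are the document's examples; adjust them to your installation, and add NLS_LANG/CLASSPATH in the same way):

```shell
#!/bin/sh
# Sketch of the oracle user's profile settings (Bourne/ksh syntax).
# Paths follow the suggested values above and are examples only.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/9201
ORACLE_TERM=xterm
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
TMPDIR=/tmp
export ORACLE_BASE ORACLE_HOME ORACLE_TERM ORA_NLS33 PATH TMPDIR

echo "$ORACLE_HOME"
```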
• Verify the existence of the file /opt/SUNWcluster/bin/lkmgr. This
is used by the OUI to indicate that the installation is being performed on a
cluster.
Note: There is a script InstallPrep.sh available which may be downloaded and run
prior to the installation of Oracle Real Application Clusters. This script verifies that
the system is configured correctly according to the Installation Guide. The output of
the script will report any further tasks that need to be performed before successfully
installing the Oracle9i DataServer (RDBMS). This script performs the following
verifications:-
• ORACLE_HOME Directory Verification
• UNIX User/umask Verification
• UNIX Group Verification
• Memory/Swap Verification
• TMP Space Verification
• Real Application Cluster Option Verification
• Unix Kernel Verification

$ ./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? y or n
y
Enter the unix group that will be used during the installation
Default: dba
dba
Enter the location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0.1
Your Operating System is SunOS
Gathering information... Please wait
Checking unix user ...
user test passed
Checking unix umask ...
umask test passed
Checking unix group ...
Unix Group test passed
Checking Memory & Swap...
Memory test passed
/tmp test passed
Checking for a cluster...
SunOS Cluster test
3.x has been detected
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters... Please wait
Running Kernel Parameter Report...
Check the report for Kernel parameter verification
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before
attempting to install the Oracle Database Software

3.3 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle
Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on
multiple CD-ROM disks and the Real Application Clusters software is part of this
distribution. During the installation process it is necessary to switch between the CD-
ROMs. OUI will manage the switching between CDs.
To install the Oracle Software, perform the following:-
• Login as the oracle user
• $ /<cdrom_mount_point>/runInstaller
• At the OUI Welcome screen, click Next.
• A prompt will appear for the Inventory Location (if this is the first time that
OUI has been run on this system). This is the base directory into which OUI
will install files. The Oracle Inventory definition can be found in the
file /var/opt/oracle/oraInst.loc. Click OK.
• Verify the UNIX group name of the user who controls the installation of the
Oracle9i software. If an instruction to
run /tmp/orainstRoot.sh appears, the pre-installation steps were not
completed successfully. Typically, the /var/opt/oracle directory does
not exist or is not writable by oracle. Run /tmp/orainstRoot.sh to
correct this, forcing Oracle Inventory files, and others, to be written to
the ORACLE_HOME directory. Once again this screen only appears the first
time Oracle9i products are installed on the system. Click Next.
• The File Location window will appear. Do NOT change the Source field. The
Destination field defaults to the ORACLE_HOME environment variable.
Click Next.
• Select the Products to install. In this example, select the Oracle9i Server then
click Next.
• Select the installation type. Choose the Enterprise Edition option. The
selection on this screen refers to the installation operation, not the database
configuration. The next screen allows for a customized database configuration
to be chosen. Click Next.
• Select the configuration type. In this example you choose the Advanced
Configuration as this option provides a database that you can customize, and
configures the selected server products. Select Customized and click Next.
• Select the other nodes on to which the Oracle RDBMS software will be
installed. It is not necessary to select the node on which the OUI is currently
running. Click Next.
• Identify the raw partition in to which the Oracle9i Real Application Clusters
(RAC) configuration information will be written. It is recommended that this
raw partition is a minimum of 100 MB in size.
• An option to Upgrade or Migrate an existing database is presented.
Do NOT select the radio button. The Oracle Migration utility is not able to
upgrade a RAC database, and will error if selected to do so.
• The Summary screen will be presented. Confirm that the RAC database
software will be installed and then click Install. The OUI will install the
Oracle9i software on to the local node, and then copy this information to the
other nodes selected.
• Once Install is selected, the OUI will install the Oracle RAC software on to
the local node, and then copy software to the other nodes selected earlier. This
will take some time. During the installation process, the OUI does not display
messages indicating that components are being installed on other nodes - I/O
activity may be the only indication that the process is continuing.
3.4 Create a RAC Database using the Oracle Database Configuration Assistant
The Oracle Database Configuration Assistant (DBCA) will create a database for you.
The DBCA creates your database using the optimal flexible architecture (OFA). This
means the DBCA creates your database files, including the default server parameter
file, using standard file naming and file placement practices. The primary phases of
DBCA processing are:-
• Verify that you correctly configured the shared disks for each tablespace (for
non-cluster file system platforms)
• Create the database
• Configure the Oracle network services
• Start the database instances and listeners
Oracle Corporation recommends that you use the DBCA to create your database. This
is because the DBCA preconfigured databases optimize your environment to take
advantage of Oracle9i features such as the server parameter file and automatic undo
management. The DBCA also enables you to define arbitrary tablespaces as part of
the database creation process. So even if you have datafile requirements that differ
from those offered in one of the DBCA templates, use the DBCA. You can also
execute user-specified scripts as part of the database creation process.
The DBCA and the Oracle Net Configuration Assistant (NETCA) also accurately
configure your Real Application Clusters environment for various Oracle high
availability features and cluster administration tools.
Note: Prior to running the DBCA it may be necessary to run the NETCA tool or to
manually set up your network files. To run the NETCA tool execute the
command netca from the $ORACLE_HOME/bin directory. This will configure the
necessary listener names and protocol addresses, client naming methods,
Net service names and Directory server usage. Also, it is recommended that the
Global Services Daemon (GSD) is started on all nodes prior to running DBCA. To run
the GSD execute the command gsd from the $ORACLE_HOME/bin directory.
• DBCA will launch as part of the installation process, but can be run manually
by executing the command dbca from the $ORACLE_HOME/bin directory
on UNIX platforms. The RAC Welcome Page displays. Choose the Oracle
Cluster Database option and select Next.
• The Operations page is displayed. Choose the option Create a Database and
click Next.
• The Node Selection page appears. Select the nodes that you want to configure
as part of the RAC database and click Next. If nodes are missing from the
Node Selection then perform clusterware diagnostics by executing
the $ORACLE_HOME/bin/lsnodes -v command and analyzing its
output. Refer to your vendor's clusterware documentation if the output
indicates that your clusterware is not properly installed. Resolve the problem
and then restart the DBCA.
• The Database Templates page is displayed. The templates other than New
Database include datafiles. Choose New Database and then click Next.
• The Show Details button provides information on the database template
selected.
• DBCA now displays the Database Identification page. Enter the Global
Database Name and Oracle System Identifier (SID). The Global Database
Name is typically of the form name.domain, for
example mydb.us.oracle.com, while the SID is used to uniquely identify an
instance (DBCA should insert a suggested SID, equivalent
to name1 where name was entered in the Database Name field). In the RAC
case the SID specified will be used as a prefix for the instance number. For
example, MYDB would become MYDB1, MYDB2 for instance 1 and 2
respectively.
• The Database Options page is displayed. Select the options you wish to
configure and then choose Next. Note: If you did not choose New Database
from the Database Template page, you will not see this screen.
• The Additional database Configurations button displays additional database
features. Make sure both are checked and click OK.
• Select the connection options desired from the Database Connection Options
page. Note: If you did not choose New Database from the Database Template
page, you will not see this screen. Click Next.
• DBCA now displays the Initialization Parameters page. This page comprises a
number of Tab fields. Modify the Memory settings if desired and then select
the File Locations tab to update information on the Initialization Parameters
filename and location. Then click Next.
• The option Create persistent initialization parameter file is selected by
default. If you have a cluster file system, then enter a file system name,
otherwise a raw device name for the location of the server parameter file
(spfile) must be entered. Then click Next.
• The button File Location Variables... displays variable information.
Click OK.
• The button All Initialization Parameters... displays the Initialization
Parameters dialog box. This box presents values for all initialization
parameters and indicates whether they are to be included in the spfile to be
created through the check box, included (Y/N). Instance specific parameters
have an instance value in the instance column. Complete entries in the All
Initialization Parameters page and select Close. Note: There are a few
exceptions to what can be altered via this screen. Ensure all entries in the
Initialization Parameters page are complete and select Next.
• DBCA now displays the Database Storage window. This page allows you to
enter file names for each tablespace in your database.
• The file names are displayed in the Datafiles folder, but are entered by
selecting the Tablespaces icon, and then selecting the tablespace object from
the expanded tree. Any names displayed here can be changed. A configuration
file can be used, (pointed to by the environment
variable DBCA_RAW_CONFIG). Complete the database storage information
and click Next.
• The Database Creation Options page is displayed. Ensure that the
option Create Database is checked and click Finish.
• The DBCA Summary window is displayed. Review this information and
then click OK.
• Once the Summary screen is closed using the OK option, DBCA begins to
create the database according to the values specified.
A new database now exists. It can be accessed via Oracle SQL*Plus or other
applications designed to work with an Oracle RAC database.
4.0 Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real
Application Clusters database environment. SRVCTL manages configuration
information that is used by several Oracle tools. For example, Oracle Enterprise
Manager and the Intelligent Agent use the configuration information that SRVCTL
generates to discover and monitor nodes in your cluster. Before using SRVCTL,
ensure that your Global Services Daemon (GSD) is running after you configure your
database. To use SRVCTL, you must have already created the configuration
information for the database that you want to administer. You must have done this
either by using the Oracle Database Configuration Assistant (DBCA), or by using
the srvctl add command as described below.
If this is the first Oracle9i database created on this cluster, then you must initialize the
clusterwide SRVM configuration. Firstly, create or edit the
file /var/opt/oracle/srvConfig.loc and add the
entry srvconfig_loc=path_name, where the path name is a small cluster-shared
raw volume, eg:

$ vi /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/vx/rdsk/datadg/rac_srvconfig_10m

Then execute the following command to initialize this raw volume (Note: This cannot
be run while the gsd is running. Prior to 9i Release 2 you will need to kill
the .../jre/1.1.8/bin/... process to stop the gsd from running.
From 9i Release 2 use the gsdctl stop command):-

$ srvconfig -init

The first time you use the SRVCTL Utility to create the configuration, start the Global
Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster's
configuration information. Then execute the srvctl add command so that Real
Application Clusters knows what instances belong to your cluster, using the following
syntax:-

For Oracle 9i (9.0.1):-

$ gsd
Successfully started the daemon on the local node.
$ srvctl add db -p db_name -o oracle_home

Then for each instance enter the command:
$ srvctl add instance -p db_name -i sid -n node

To display the configuration details for, for example, databases racdb1/2, on nodes
racnode1/2 with instances racinst1/2, run:-

$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1

Examples of starting and stopping RAC follow:-

$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1
$ srvctl stop -p racdb2
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1
$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2
$ srvctl stop -p racdb1 -s inst
PRKO-2035 : Instance is already stopped on node: racnode2
Instance successfully stopped on node: racnode1

For Oracle 9i (9.2.0):-

$ gsdctl start
Successfully started the daemon on the local node.
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]

Then for each instance enter the command:

$ srvctl add instance -d db_name -i sid -n node

To display the configuration details for, for example, databases racdb1/2, on nodes
racnode1/2 with instances racinst1/2, run:-

$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1
$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2

Examples of starting and stopping RAC follow:-

$ srvctl start database -d racdb2
$ srvctl stop database -d racdb2
$ srvctl stop instance -d racdb1 -i racinst2
$ srvctl start instance -d racdb1 -i racinst2
$ gsdctl stat
GSD is running on local node
$ gsdctl stop

For further information on srvctl and gsdctl see the Oracle9i Real Application
Clusters Administration manual.