Technical white paper
Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4 Installation cookbook
Table of contents
Executive summary
Introduction
Environment description
Hardware description
Software description
Documentation
Useful My Oracle Support Notes
Infrastructure description
Environment pre-requisites
Oracle Grid Infrastructure installation server checklist
HP BladeSystem
HP Virtual Connect
HP Onboard Administrator
Connectivity
System pre-requisites
Memory requirement
Check the temporary space available
Check for the kernel release
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
Check for the newly presented shared LUNs
Set the kernel parameters
Check the necessary packages
Checking shared memory file system mount
Preparing the network
Setting Network Time Protocol for Cluster Time Synchronization
Check the SELinux setting
Create the grid and oracle users and groups
Configure the secure shell service
Set the limits
Installing the cvuqdisk RPM for Linux
Storage connectivity driver configuration
Install the ASMLib support library
Check the available disk space
Setting the disk I/O scheduler on Linux
Determining root script execution plan
Oracle Clusterware installation
Environment setting
Check the environment before installation
RunInstaller
Check the installation
ASM disk group creation
Oracle RAC 12c database installation
Environment setting
Installation
Create a RAC database
Post-installation steps
Cluster verification
Cluster verification utility
Appendix
Anaconda file
Grid user environment setting
Oracle user environment setting
Summary
For more information
Executive summary
On July 1, 2013, Oracle announced general availability of Oracle Database 12c, designed for the Cloud. New features such as Oracle Multitenant, for consolidating multiple databases, and Automatic Data Optimization, for compressing and tiering data at a higher density, resource efficiency, and flexibility, along with many other enhancements, will be important for many customers to understand and implement as new applications take advantage of these features.
Globally, HP continues to be the leader in installed servers running Oracle. We're going to extend our industry-leading Oracle footprint by delivering the best customer experience with open, standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP continues to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP has tested various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle help the world's leading businesses succeed, and we've accumulated a great deal of experience along the way. We plan to leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated.
This document provides a step-by-step installation description of Oracle RAC 12c running Red Hat Enterprise Linux on HP ProLiant servers and HP 3PAR StoreServ storage.
This paper is not intended to replace the official Oracle documentation. It is a validation and experience-sharing exercise, with a focus on the system pre-requisites, and is complementary to the generic documentation from HP, Oracle, and Red Hat.
Oracle Real Application Clusters (RAC) enables multiple cluster nodes to act as a single processing engine, wherein any node can respond to a database request. Servers in a RAC deployment are bound together using Oracle Clusterware (CRS) cluster management software. Oracle Clusterware enables the servers to work together as a single entity.
Target audience: This document addresses the RAC 12c installation procedures. Readers should have a good knowledge of Linux administration as well as knowledge of Oracle databases.
This white paper describes testing performed in July 2013
Introduction
HP Converged Infrastructure delivers the framework for a dynamic data center, eliminating costly, rigid IT silos while unlocking resources for innovation rather than management. This infrastructure matches the supply of IT resources with the demand for business applications; its overarching benefits include the following:
• Modularity
• Openness
• Virtualization
• Resilience
• Orchestration
By transitioning away from a product-centric approach to a shared-service management model, HP Converged Infrastructure can accelerate standardization, reduce operating costs, and accelerate business results.
A dynamic Oracle business requires a matching IT infrastructure. You need a data center with the flexibility to automatically add processing power to accommodate spikes in Oracle database traffic, and the agility to shift resources from one application to another as demand changes. To become truly dynamic, you must start thinking beyond server virtualization and consider the benefits of virtualizing your entire infrastructure. Thus, virtualization is a key component of HP Converged Infrastructure.
HP's focus on cloud and virtualization is a perfect match for the new features of the Oracle 12c database, designed for the cloud. Customers who are eager to start building cloud solutions can start with the HP integrated solutions; customers who want to move at a slower pace can get started with Converged Infrastructure building blocks that put them on the path towards integrated solutions at a later date.
Environment description
Drive business innovation and eliminate server sprawl with HP BladeSystem, the industry's only Converged Infrastructure architected for any workload, from client to cloud. HP BladeSystem is engineered to maximize every hour, watt, and dollar, saving up to 56% total cost of ownership over traditional infrastructures.
With HP BladeSystem it is possible to create a change-ready, power-efficient, network-optimized, simple-to-manage, and high-performance infrastructure on which to consolidate, build, and scale your Oracle database implementation. Each HP BladeSystem c7000 enclosure can accommodate up to 16 half-height blades, or up to 8 full-height blades, or a mixture of both. In addition, there are 8 Interconnect Bays with support for any I/O fabric your database applications require.
Choosing a server for Oracle databases involves selecting a server that is the right mix of performance, price, and power efficiency, with the most optimal management. HP's experience during testing and in production has been that 2- or 4-socket servers are ideal platforms for Oracle database for the cloud, depending on the workload requirements. 2- and 4-socket systems offer better memory performance and memory scaling for databases that need a large System Global Area (SGA). HP BladeSystem reduces costs and simplifies management through shared infrastructure.
The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service: simply, efficiently, and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Use of unique thin technologies reduces acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions. The HP 3PAR StoreServ Gen4 ASIC in each of the system's controller nodes provides a hyper-efficient, silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels.
Hardware description
For this white paper we rely on two main components of the HP Converged Infrastructure introduced earlier: two HP ProLiant BL460c Gen8 servers and an HP 3PAR StoreServ 7200 as SAN storage, as shown in figure 1.
Figure 1. The c7000 and the 3PAR StoreServ 7200 used during this cookbook preparation. This view is a subset of the fully populated enclosure.
Software description
• Red Hat Enterprise Linux Server release 6.4
• Oracle Database 12cR1 Real Application Clusters
Documentation
The table below lists the main documentation used during the creation of this white paper.
Document ID Document title
E17888-14 Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
E17720-15 Oracle Database Installation Guide 12c Release 1 (12.1) for Linux
Useful My Oracle Support Notes
Note ID Title
1089399.1 Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
1514012.1 runcluvfy stage -pre crsinst generates "Reference Data is not available for verifying prerequisites on this operating system distribution" on Red Hat 6
1567127.1 RHEL 6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description
Multiple configurations are possible in order to build a RAC cluster. This chapter only gives some information about the architecture we worked with during this project. For the configuration tested, two HP ProLiant blade servers attached to HP 3PAR StoreServ storage, with a fully redundant SAN and network LAN, were used.
In this section we will also look at the HP ProLiant blade infrastructure.
Environment pre-requisites
Based on this architecture, the adaptive infrastructure requirements for an Oracle RAC are:
• A high-speed communication link (the "private" Virtual Connect network) between all nodes of the cluster. (This link is used for RAC Cache Fusion, which allows RAC nodes to synchronize their memory caches.)
• A common "public" (Virtual Connect) communication link for communication with Oracle clients.
• The storage subsystem must be accessible by all cluster nodes for access to the Oracle shared files (Voting, OCR, and Database files).
• At least two HP servers are required. In the current configuration we used a pair of HP ProLiant server blades in a c7000 blade chassis, configured to boot from the HP 3PAR StoreServ storage subsystem.
Oracle Grid Infrastructure installation server checklist
Network switches
• Public network switch, at least 1 GbE, connected to a public gateway.
• Private network switch, at least 1 GbE, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP.
Runlevel: Servers should be either in runlevel 3 or runlevel 5.
Random Access Memory (RAM): At least 4 GB of RAM for an Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.
Temporary disk space allocation: At least 1 GB allocated to /tmp.
Operating system:
• Supported in the list of supported kernels and releases listed in http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm#CIHFICFD
• In our configuration, Red Hat Enterprise Linux 6. Supported distributions:
– Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
– Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
• The same operating system kernel running on each cluster member node.
• OpenSSH installed manually.
Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS).
• Local storage space for Oracle software.
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• For Linux x86_64 platforms, allocate 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• Boot from SAN is supported.
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet, to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: Combine energy-reduction technologies, such as the 80 PLUS Platinum, 94 percent-efficient HP 2650W/2400W Platinum Power Supply, with pinpoint measurement and control through Dynamic Power Capping to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: Wire once, then add, replace, or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: No single point of failure, to keep your business up and running.
• HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:
• Reduces the number of cables required for an enclosure, compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications.
Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them the updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.
Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50% by consolidating all the NIC connections onto two 10 Gb ports.
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity, the number of NICs, HBAs, and interconnect modules needed, and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly, while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second, redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on one component, the detailed information will appear in the central frame.
Figure 5. From the HP Onboard Administrator, very detailed information related to the servers is available.
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6. Components' connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM Swap
4 to 16 GB 1 times the RAM size
> 16 GB 16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename     Type       Size     Used  Priority
/dev/dm-3    partition  4194296  0     -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB.
[root@oracle52 ~]# free -g
             total   used   free   shared   buffers   cached
Mem:           189     34    154        0         0       29
-/+ buffers/cache:      5    184
Swap:            3      0      3
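If a larger swap area were ever required (for example, to get closer to the 16 GB recommendation), it can be extended online with a swap file. The sketch below is only an illustration; the file name and size are arbitrary:
[root@oracle52 ~]# dd if=/dev/zero of=/swapfile01 bs=1M count=16384
[root@oracle52 ~]# chmod 600 /swapfile01
[root@oracle52 ~]# mkswap /swapfile01
[root@oracle52 ~]# swapon /swapfile01
[root@oracle52 ~]# echo "/swapfile01 swap swap defaults 0 0" >> /etc/fstab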
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2    39G  4.1G   33G  12% /
In our case, /tmp is part of /. Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
Note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally, go to My Oracle Support and check whether this version is certified, in the Certifications tab, as shown in figure 7.
Figure 7. Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the HP SUM pre-requisites, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from hp.com.
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir /cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013.02.0-0_725490-001_spp_2013.02.0-SPP2013020B.2013_0628.2.iso /cdrom
[root@oracle52 kits]# cd /cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for roughly 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir /supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz /supspprhel6
[root@oracle52 kits]# cd /supspprhel6
[root@oracle52 supspprhel6]# tar xvf supspprhel64en.tar.gz
[root@oracle52 supspprhel6]# ./hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
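As an illustration only (the exact repository path and the package names must be checked on the SDR site itself), a yum repository definition for the SPP content might look like this:
[root@oracle52 ~]# cat /etc/yum.repos.d/hp-spp.repo
[hp-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hp.com/SDR/repo/spp/rhel/6/x86_64/current
enabled=1
gpgcheck=0
[root@oracle52 ~]# yum list available | grep ^hp-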
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (Fibre Channel, SAS, and so on), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find out what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
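A possible sequence with sg3_utils would be the following (the output differs per system; the grep string assumes 3PAR LUNs are presented):
[root@oracle52 ~]# yum install sg3_utils
[root@oracle52 ~]# rescan-scsi-bus.sh
[root@oracle52 ~]# grep 3PARdata /proc/scsi/scsi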
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (**)
kernel.shmmax half of the RAM, or 4 GB (*)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the release shown for each package is the minimum release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit release. The following command output will specify the base architecture of a given package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
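To check the whole list at once, a small loop such as the one below can be used; the package list is abbreviated here for readability:
for p in binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst libX11 libXau libxcb libXi make sysstat
do
  rpm -q $p --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n"
done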
Finally installation of the packages should be done using yum This is the easiest way as long as a repository server is available
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
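When Oracle Automatic Memory Management is used, /dev/shm must also be large enough to hold the SGA. By default tmpfs is sized at half of the physical RAM, which is sufficient here; if a different size were ever needed, it could be set with a size option, as in the sketch below (the value is only an example):
[root@oracle52 ~]# mount -o remount,size=96g /dev/shm
[root@oracle52 ~]# df -h /dev/shm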
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It does not cover the Grid Naming Service; for more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note that the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is the recommendation.
Before moving forward, we need to define the node and cluster information.
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
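Resolution of the SCAN through DNS can be verified from any node before starting the installation. The example below is only a sketch: it assumes the SCAN name okc12c-scan has been registered in the corporate DNS with the three addresses listed above (the actual SCAN name is site-specific).
[root@oracle52 ~]# nslookup okc12c-scan
Name:    okc12c-scan
Address: 172.16.0.34
Name:    okc12c-scan
Address: 172.16.0.35
Name:    okc12c-scan
Address: 172.16.0.36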
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the redundant private interconnect network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
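For reference, a minimal static configuration for the first private interface on node 1 could look like the sketch below; the values simply mirror the addressing table above, and settings such as NM_CONTROLLED may differ depending on local standards.
[root@oracle52 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.52
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no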
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
Check that the NTP server is reachable. The reach value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote       refid   st t when poll reach   delay  offset jitter
==============================================================================
 ntp2.austin.hp  GPS       1 u    5   64     1 133.520  15.473  0.000
If the time difference between the database server and the NTP server is too large, you might have to resynchronize your server manually. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. In that case, ntpq -p will report the same time but with a different refid (see below). This shouldn't be a problem; however, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote       refid          st t when poll reach   delay  offset jitter
==============================================================================
 ntp.hp.net      172.16.255.10   3 u    6   64     1 128.719   5.275  0.000
[root@oracle52 ~]# ntpq -p
     remote       refid          st t when poll reach   delay  offset jitter
==============================================================================
 ntp.hp.net      172.16.58.10    3 u    3   64     1 108.900  12.492  0.000
The error will be logged as:
INFO INFO Error Message PRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error Message PRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9. runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best option is to point to a single NTP server, or to a GPS-referenced server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote       refid   st t when poll reach   delay  offset jitter
==============================================================================
 ntp2.austin.hp  GPS       1 u    5   64     1 133.520  15.473  0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[rootoracle52 oraInventory] getenforce
Enforcing
[rootoracle52 oraInventory] setenforce 0
[rootoracle52 oraInventory] getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation.
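If the firewall is not required on these nodes, the change can be made persistent across reboots (run on each node); otherwise, appropriate rules for the RAC ports must be configured instead:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off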
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install the Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided four times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: The authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually copy the rsa public key file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually.
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate a key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, then SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users. Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
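The new limits can then be checked from a fresh login session; the values below simply reflect the limits.conf entries above (pam_limits must be active for login shells, which is the RHEL default):
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu
1024
65536
2047
16384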
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the rpm directory on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is no longer the case with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required).
Check if the multipath driver is installed
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, which configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings, based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the DM Multipath features, or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the “devices” section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
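To confirm that the new device section is actually picked up, one way (a sketch, not part of the original procedure) is to query the running daemon for the merged configuration and reload the maps:
multipathd -k"show config" | grep -A 12 3PARdata
service multipathd reload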
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
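Before rebooting, it can be worth confirming that the new initramfs exists and contains the multipath and HBA pieces; a quick check (a sketch, assuming the dracut-provided lsinitrd tool) could be:
ls -l /boot/initramfs-$(uname -r).img
lsinitrd /boot/initramfs-$(uname -r).img | grep -E "multipath|qla2xxx"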
Finally, we may update the boot menu for rollback purposes. Add the backup entry shown below (the title ending in “bkp”, which boots the saved initramfs):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
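After the reboot, the active values can be verified through sysfs; for instance (a sketch, not part of the original procedure):
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
cat /sys/module/qla2xxx/parameters/qlport_down_retry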
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly device alias by using the alias and the WWID attributes of the multipath device, present in the multipath subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the “multipath” command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways (a sketch of the first option follows the list):
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
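For the first option, a minimal /etc/rc.local sketch could look like the lines below (an illustration only; the grid:asmadmin ownership and 660 mode are assumptions matching the ASM setup used later in this paper):
# /etc/rc.local excerpt (hypothetical): reset ownership and mode of the shared volumes at boot
chown grid:asmadmin /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01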
For the second option (the udev rule), we would have to update the system as below. The file called “99-oracle.rules” is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published a white paper some time ago describing how to combine the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm \
oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the package starts automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
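Optionally, the mapping between an ASM disk label and its underlying block device can be cross-checked with querydisk (a sketch, not required by the installation):
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01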
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
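For the new scan order to take effect, the oracleasm service can be restarted and the disks rescanned (a short sketch of the verification we would run):
[root@oracle52 ~]# service oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks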
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: we need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path                           Usage                                   Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner    5.8GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user     –
/u01/app/base                  $ORACLE_BASE for the grid owner         3.5GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)               20GB
/dev/oracleasm/disks/VOTING    OCR (volume)                            2GB
/dev/oracleasm/disks/DATA01    Database (volume)                       20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
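An alternative to the grub elevator parameter (not used in this environment, shown only as an illustration with a hypothetical file name) is a udev rule that sets the Deadline scheduler for every device-mapper block device:
# /etc/udev/rules.d/60-oracle-scheduler.rules (hypothetical)
ACTION=="add|change", KERNEL=="dm-[0-9]*", ATTR{queue/scheduler}="deadline"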
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)   ALL
grid    ALL=(ALL)   NOPASSWD: ALL
oracle  ALL=(ALL)   NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf %{version} redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from “My Oracle Support Doc ID 1514012.1”. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment, there is no need for an automatic update. Any automatic update would be a customer strategy.
Select “Install and Configure Oracle Grid Infrastructure for a Cluster”.
In this example the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note “Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path”.
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the “sudo root.sh” command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource. The crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the “-t” extension for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called “ASMCA” which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can easily be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click “Create” to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used “External” redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click “Mount All” to mount them all.
Click “Yes” to confirm.
The disk groups are ready. We can now quit “ASMCA”.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
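The same information can be cross-checked from SQL against the ASM instance (a quick sketch, not part of the original flow):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;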
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Create a RAC database
Connect as the “oracle” user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like “Manage Pluggable Database” and “Instance Management”. For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The “Server Pool” is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload load balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE     VALUE
----------------- -------- ------------------------------
remote_listener   string   oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE     VALUE
----------------- -------- ------------------------------
remote_listener   string   oracle34:1521
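For reference, a client would typically reach the database through the SCAN name only; a tnsnames.ora entry of the following shape (an illustration, not generated by the installation) would be enough:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )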
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally, we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
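The SCAN VIP and SCAN listener resources can also be checked at the cluster level, and the name resolution verified from the OS (a sketch of typical checks):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ nslookup oracle34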
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations – well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
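Since the OCR now lives in ASM, it is also worth knowing that a manual backup can be taken and listed at any time (a sketch; run as root):
[root@oracle52 ~]# ocrconfig -manualbackup
[root@oracle52 ~]# ocrconfig -showbackup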
Additional commands
To disable (or re-enable) the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
This paper doesn't intend to replace the official Oracle documentation. It is a validation and experience-sharing exercise, with a focus on the system pre-requisites, and is complementary to the generic documentation from HP, Oracle, and Red Hat.
Oracle Real Application Clusters (RAC) enables multiple cluster nodes to act as a single processing engine, wherein any node can respond to a database request. Servers in a RAC deployment are bound together using Oracle Clusterware (CRS) cluster management software. Oracle Clusterware enables the servers to work together as a single entity.
Target audience: This document addresses the RAC 12c installation procedures. The readers should have a good knowledge of Linux administration as well as knowledge of Oracle databases.
This white paper describes testing performed in July 2013.
Introduction
HP Converged Infrastructure delivers the framework for a dynamic data center, eliminating costly, rigid IT silos while unlocking resources for innovation rather than management. This infrastructure matches the supply of IT resources with the demand for business applications; its overarching benefits include the following:
• Modularity
• Openness
• Virtualization
• Resilience
• Orchestration
By transitioning away from a product-centric approach to a shared-service management model, HP Converged Infrastructure can accelerate standardization, reduce operating costs, and accelerate business results.
A dynamic Oracle business requires a matching IT infrastructure. You need a data center with the flexibility to automatically add processing power to accommodate spikes in Oracle database traffic, and the agility to shift resources from one application to another as demand changes. To become truly dynamic, you must start thinking beyond server virtualization and consider the benefits of virtualizing your entire infrastructure. Thus, virtualization is a key component of HP Converged Infrastructure.
HP's focus on cloud and virtualization is a perfect match for the new features of the Oracle 12c database designed for the cloud. Customers who are eager to start building cloud solutions can start with the HP integrated solutions. Customers who want to move at a slower pace can get started with Converged Infrastructure building blocks that put them on the path towards integrated solutions at a later date.
Environment description
Drive business innovation and eliminate server sprawl with HP BladeSystem, the industry's only Converged Infrastructure architected for any workload from client to cloud. HP BladeSystem is engineered to maximize every hour, watt, and dollar, saving up to 56% in total cost of ownership over traditional infrastructures.
With HP BladeSystem it is possible to create a change-ready, power-efficient, network-optimized, simple to manage, and high-performance infrastructure on which to consolidate, build, and scale your Oracle database implementation. Each HP BladeSystem c7000 enclosure can accommodate up to 16 half-height blades, or up to 8 full-height blades, or a mixture of both. In addition, there are 8 interconnect bays with support for any I/O fabric your database applications require.
Choosing a server for Oracle databases involves selecting a server with the right mix of performance, price, and power efficiency and the most optimal management. HP's experience during testing and in production has been that 2- or 4-socket servers are ideal platforms for Oracle database for the cloud, depending on the workload requirements. 2- and 4-socket systems offer better memory performance and memory scaling for databases that need a large System Global Area (SGA). HP BladeSystem reduces costs and simplifies management through shared infrastructure.
The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service: simply, efficiently, and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Use of unique thin technologies reduces acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions. The HP 3PAR StoreServ Gen4 ASIC in each of the system's controller nodes provides a hyper-efficient, silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels.
Hardware description
For this white paper we rely on two main components of the HP Converged Infrastructure introduced earlier: two HP ProLiant BL460c Gen8 servers and an HP 3PAR StoreServ 7200 as SAN storage, as shown in figure 1.
Figure 1. The c7000 and the 3PAR StoreServ 7200 used during this cookbook preparation. This view is a subset of the fully populated enclosure.
Software description
• Red Hat Enterprise Linux Server release 6.4
• Oracle Database 12cR1 Real Application Cluster
Documentation
The table below lists the main documentation used during the creation of this white paper
Document ID Document title
E17888-14 Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
E17720-15 Oracle Database Installation Guide 12c Release 1 (12.1) for Linux
Useful My Oracle Support Notes
Note ID Title
1089399.1 Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
1514012.1 runcluvfy stage -pre crsinst generates "Reference Data is not available for verifying prerequisites on this operating system distribution" on Red Hat 6
1567127.1 RHEL 6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description
Multiple configurations are possible in order to build a RAC cluster. This chapter only delivers some information about the architecture we worked with during this project. For the configuration tested, two HP ProLiant blade servers attached to HP 3PAR StoreServ storage through a fully redundant SAN and network LAN were used.
In this section we will also look at the HP ProLiant blade infrastructure.
Environment pre-requisites
Based on this architecture, the adaptive infrastructure requirements for an Oracle RAC are:
• A high-speed communication link (the "private" Virtual Connect network) between all nodes of the cluster. (This link is used for RAC Cache Fusion, which allows RAC nodes to synchronize their memory caches.)
• A common "public" (Virtual Connect) communication link for communication with Oracle clients.
• The storage subsystem must be accessible by all cluster nodes for access to the Oracle shared files (voting, OCR, and database files).
• At least two HP servers are required. In the current configuration we used a couple of HP ProLiant server blades in a c7000 blade chassis, configured to boot from the HP 3PAR StoreServ storage subsystem.
Oracle Grid Infrastructure installation server checklist
Network switches
• Public network switch, at least 1 GbE, connected to a public gateway.
• Private network switch, at least 1 GbE, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP.
Runlevel: Servers should be either in runlevel 3 or runlevel 5.
Random Access Memory (RAM): At least 4 GB of RAM for an Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.
Temporary disk space allocation: At least 1 GB allocated to /tmp.
Operating system:
• Supported in the list of supported kernels and releases listed in http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm#CIHFICFD
• In our configuration, Red Hat Enterprise Linux 6. Supported distributions:
– Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
– Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
• Same operating system kernel running on each cluster member node.
• OpenSSH installed manually.
Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS).
• Local storage space for the Oracle software:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• For Linux x86_64 platforms, allocate 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• Boot from SAN is supported.
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: Combine energy-reduction technologies such as the 80 PLUS Platinum, 94 percent-efficient HP 2650W/2400W Platinum Power Supply with pinpoint measurement and control through Dynamic Power Capping to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: Wire once, then add, replace, or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: No single point of failure, to keep your business up and running.
• HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:
• Reduces the number of cables required for an enclosure, compared to using pass-through modules.
• Reduces the number of edge switches that LAN and SAN administrators must manage.
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators.
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance.
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for east/west traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications.
Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them the updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.
Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50% by consolidating all the NIC connections onto two 10 Gb ports.
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity, the number of NICs, HBAs, and interconnect modules needed, and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly, while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second, redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active/standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on one component, the detailed information will appear in the central frame.
Figure 5 From the HP Onboard Administrator very detailed information related to the server information is available
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6. Component's connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM             Swap
4 to 16 GB      1 times the RAM size
> 16 GB         16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename        Type            Size       Used    Priority
/dev/dm-3       partition       4194296    0       -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB:
[root@oracle52 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           189         34        154          0          0         29
-/+ buffers/cache:          5        184
Swap:            3          0          3
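As a quick sanity check, the recommended swap size from the ratio table above can be derived from MemTotal with a short script (a sketch, not part of the original procedure):
# Sketch: print the recommended swap size based on the installed RAM
ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
ram_gb=$((ram_kb / 1024 / 1024))
if [ "$ram_gb" -le 16 ]; then
  echo "Recommended swap: ${ram_gb} GB (1 x RAM)"
else
  echo "Recommended swap: 16 GB"
fi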
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2    39G  4.1G   33G  12% /
In our case, /tmp is part of /. Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
By the way, note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally, go to My Oracle Support and check that this version is certified, in the certification tab, as shown in figure 7.
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from the HP Support Center (h20566.www2.hp.com, product sp4ts.oid=5177950).
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 supspprhel6]# tar xvf supspprhel64en.tar.gz
[root@oracle52 supspprhel6]# ./hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (like Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with "dmesg" to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
Once the SPP is installed, an alternative is to use the hp_rescan utility; look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
 hp_rescan
DESCRIPTION
 Sends the rescan signal to all or selected Fibre Channel HBAs/CNAs
OPTIONS
 -a --all      - Rescan all Fibre Channel HBAs
 -h --help     - Prints this help message
 -i --instance - Rescan a particular instance <SCSI host number>
 -l --list     - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (*)
kernel.shmmax                    Half of the RAM, or 4 GB (**)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) 8239044 in our case
(**) max is 4294967296
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
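As a cross-check, the shmall value quoted in the table above follows directly from these two figures (a sketch of the arithmetic):
# shmall = physical RAM in bytes / page size
echo $(( 32956176 * 1024 / 4096 ))
# prints 8239044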
In order to make these parameters persistent, update the /etc/sysctl.conf file.
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters in the current session.
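Once loaded, the running values can be compared against the table above with a short loop (a sketch; adjust the parameter list as needed):
# Sketch: display the current value of each required kernel parameter
for p in kernel.sem kernel.shmall kernel.shmmax kernel.shmmni \
         fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range \
         net.core.rmem_default net.core.rmem_max \
         net.core.wmem_default net.core.wmem_max; do
    printf "%-32s %s\n" "$p" "$(sysctl -n $p)"
done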
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release given is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of the specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386       0.3.106-5       rhel-x86_64-server-5
libaio-devel.x86_64     0.3.106-5       rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================
 Package          Arch     Version       Repository               Size
============================================================================
Installing:
 libaio-devel     i386     0.3.106-5     rhel-x86_64-server-5     12 k
Transaction Summary
============================================================================
Install       1 Package(s)
Upgrade       0 Package(s)
Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm                    |  12 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libaio-devel                                     1/1
Installed:
  libaio-devel.i386 0:0.3.106-5
Complete!
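To check the full prerequisite list in one pass, a loop such as the following can be used (a sketch; extend the package list to every item named above):
# Sketch: report missing prerequisite packages
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc \
           glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio \
           libaio-devel libXext libXtst libX11 libXau libxcb libXi \
           make sysstat; do
    rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" "$pkg" \
        || echo "MISSING: $pkg"
done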
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system.
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs                   /dev/shm                tmpfs   rw,exec         0 0
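To apply a modified /etc/fstab entry without rebooting, the area can simply be remounted and checked (a sketch):
# Remount /dev/shm with the new options and verify
mount -o remount,rw,exec /dev/shm
mount | grep /dev/shm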
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data              Value
Cluster name      okc12c
SCAN address 1    172.16.0.34
SCAN address 2    172.16.0.35
SCAN address 3    172.16.0.36
Data                           Node 1           Node 2
Server public name             oracle52         oracle53
Server public IP address       172.16.0.52      172.16.0.53
Server VIP name                oracle52vip      oracle53vip
Server VIP address             172.16.0.32      172.16.0.33
Server private name 1          oracle52priv0    oracle53priv0
Server private IP address 1    192.168.0.52     192.168.0.53
Server private name 2          oracle52priv1    oracle53priv1
Server private IP address 2    192.168.1.52     192.168.1.53
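Assuming the SCAN name (okc12c-scan is used here purely as a hypothetical example) has been registered in the corporate DNS with the three addresses above, its resolution can be verified from any node:
# The three SCAN addresses should be returned (order rotates with round-robin DNS)
nslookup okc12c-scan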
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected. IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check that the NTP server is reachable. The reach value (shown in red in the original output) needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64    1   133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP and you plan to continue using it instead of the Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1   128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1   108.900   12.492   0.000
The error will be logged as:
INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.58.10" is common only to the following nodes "oracle52"
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.255.10" is common only to the following nodes "oracle53"
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: Error Message: PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: Cause: An attempt to query the NTP daemon using the ntpq command failed on all nodes.
INFO: Action: Make sure that the NTP query command ntpq is available on all nodes and make sure that the user running the CVU check has permissions to execute it.
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64    1   133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation (see the "Red Hat iptables setting" link in the For more information section).
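If the firewall is to stay disabled across reboots (assuming your site security policy allows it), the service can also be removed from the runlevels:
# Keep iptables disabled after the next reboot and verify
chkconfig iptables off
chkconfig --list iptables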
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages carefully creating the users and passwords. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run; a simple guard is sketched below.
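One common way to meet this requirement (a sketch, not taken from the original paper) is to make any terminal-dependent commands conditional on an interactive session in the installation owners' profiles:
# In the grid and oracle users' ~/.bashrc: only run stty when a terminal is attached
if [ -t 0 ]; then
    stty intr ^C
fi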
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually copy the rsa public key file from the remote node to the local one, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually.
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
# should return the date of node1
ssh nodename2 date
# should return the date of node2
ssh private_interconnect_nodename1 date
# should return the date of node1
ssh private_interconnect_nodename2 date
# should return the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
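After the users log in again, the new limits can be checked quickly (a sketch):
# Run as the grid or oracle user after a fresh login
ulimit -u    # max user processes (nproc)
ulimit -n    # open files (nofile)
ulimit -s    # stack size in KB (stack)
ulimit -l    # max locked memory in KB (memlock)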
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required); the document can be reached from that site.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the array settings which are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "3PARdata"
                product                 "VV"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector           "round-robin 0"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                rr_min_io_rq            100
                no_path_retry           18
        }
}
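After editing /etc/multipath.conf, the running multipathd needs to pick up the new settings. A minimal sketch (either form should work on RHEL 6, assuming no I/O depends on the paths being reconfigured at that moment):
service multipathd reload
# or, interactively:
multipathd -k
multipathd> reconfigure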
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. The third title entry below ("bkp") is the added rollback entry:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
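After the reboot, one way to double-check that the new queue depth is in effect is to read the module parameter back from sysfs (assuming the qla2xxx module exposes its parameters there, which is the usual behavior):
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
# should report 16 once the new settings are loaded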
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid 360002ac0000000000000001f00006e40
                mode 0600
        }
        multipath {
                wwid 360002ac0000000000000002100006e40
                alias voting
        }
        multipath {
                wwid 360002ac0000000000000002200006e40
                alias data01
        }
        multipath {
                wwid 360002ac0000000000000002300006e40
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw devices are not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
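A minimal sketch of option 1 could look like the lines below, appended to /etc/rc.local on each node. The aliases and the ownership scheme are the ones used in the udev example that follows; this is only an illustration, since ASMLib handles permissions in our setup.
# reset ownership and permissions on the shared multipath aliases at boot
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01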
For the udev approach (option 2), we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat provides its own library; it is part of the supplementary channel. As of version 6, the Red Hat ASMLib is not supported.
HP published a white paper some time ago describing how to combine the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
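For instance, a quick sketch of toggling the automatic startup (run as root on each node):
/etc/init.d/oracleasm enable
/etc/init.d/oracleasm disable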
The system administrator has one last task: every disk that ASMLib is going to access needs to be marked as an ASM disk and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should be owned by the grid user and the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
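After changing these settings, restarting the oracleasm service and listing the disks again is a simple way to confirm they are still discovered through the multipath devices (a sketch, run as root):
/etc/init.d/oracleasm restart
oracleasm listdisks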
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the grid ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation (see the sketch below).
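As an illustration only (the directory /u01/tmp is an assumed path, not part of this setup), redirecting the temporary space could look like this in the oracle user environment:
mkdir -p /u01/tmp && chmod 1777 /u01/tmp
export ORA_TMP=/u01/tmp
export ORA_TEMP=/u01/tmp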
In this example we created the following directories:
Path                              Usage                                      Size
/u01/app/oracle                   $ORACLE_BASE for the oracle db owner       5.8 GB
/u01/app/oracle/12c               $ORACLE_HOME for the oracle db user        –
/u01/app/base                     $ORACLE_BASE for the grid owner            3.5 GB
/u01/app/grid/12c                 $ORACLE_HOME for the grid user             –
/dev/oracleasm/disks/FRA01        Flash recovery area (ASM)                  20 GB
/dev/oracleasm/disks/VOTING       OCR (volume)                               2 GB
/dev/oracleasm/disks/DATA01       Database (volume)                          20 GB
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
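An alternative, not used in this setup, is a udev rule that applies the deadline elevator only to the device-mapper block devices instead of changing it system-wide through the boot loader. The file name below is hypothetical:
# /etc/udev/rules.d/99-oracle-scheduler.rules
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"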
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
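To double-check the workaround before launching runcluvfy.sh, the same kind of query can be run manually; assuming the dummy package registers itself under the name redhat-release (as the install output above suggests), it should return 6Server:
rpm -q --qf "%{version}\n" redhat-release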
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session or an X terminal with a display.
In a basic single installation environment, there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
38
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
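As an illustration of how clients benefit from the SCAN, a tnsnames.ora entry on a client can simply reference the SCAN name and the database service; the names below are the ones used in this environment (oracle34 and HP12C) and should be adjusted to your own setup:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )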
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed simply by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible on clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. Terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
At this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has changed slightly, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener parameter points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))        # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))        # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON        # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
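The running state of the SCAN VIP and SCAN listener can also be checked with srvctl; the commands below are a suggestion, and the output will vary with the environment:
srvctl status scan
srvctl status scan_listener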
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
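Similarly, the whole clusterware stack can be stopped and restarted manually on a node when needed (run as root on that node, using the grid home as above):
$ORACLE_HOME/bin/crsctl stop crs
$ORACLE_HOME/bin/crsctl start crs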
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Executive summary
On July 1 2013 Oracle announced general availability of Oracle Database 12c designed for the Cloud New features such as Oracle Multitenant for consolidating multiple databases and Automatic Data Optimization for compressing and tiering data at a higher density resource efficiency and flexibility along with many other enhancements will be important for many customers to understand and implement as new applications take advantage of these features
Globally HP continues to be the leader of installed servers running Oracle Wersquore going to extend our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP continues to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP has tested various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle help the worldrsquos leading businesses succeed and wersquove accumulated a great deal of experience along the way We plan to leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated
This document provides a step by step installation description of Oracle RAC 12c running Red Hat Enterprise Linux on HP ProLiant servers and HP 3PAR StoreServ storage
This paper doesn't intend to replace the official Oracle documentation. It is a validation and experience-sharing exercise, focused on the system pre-requisites, and is complementary to the generic documentation from HP, Oracle and Red Hat.
Oracle Real Application Clusters (RAC) enables multiple cluster nodes to act as a single processing engine, wherein any node can respond to a database request. Servers in a RAC deployment are bound together using Oracle Clusterware (CRS) cluster management software, which enables the servers to work together as a single entity.
Target audience: This document addresses the RAC 12c installation procedures. Readers should have a good knowledge of Linux administration as well as knowledge of Oracle databases.
This white paper describes testing performed in July 2013.
Introduction
HP Converged Infrastructure delivers the framework for a dynamic data center, eliminating costly, rigid IT silos while unlocking resources for innovation rather than management. This infrastructure matches the supply of IT resources with the demand for business applications; its overarching benefits include the following:
• Modularity
• Openness
• Virtualization
• Resilience
• Orchestration
By transitioning away from a product-centric approach to a shared-service management model, HP Converged Infrastructure can accelerate standardization, reduce operating costs and accelerate business results.
A dynamic Oracle business requires a matching IT infrastructure. You need a data center with the flexibility to automatically add processing power to accommodate spikes in Oracle database traffic, and the agility to shift resources from one application to another as demand changes. To become truly dynamic, you must start thinking beyond server virtualization and consider the benefits of virtualizing your entire infrastructure. Thus, virtualization is a key component of HP Converged Infrastructure.
HP's focus on cloud and virtualization is a perfect match for the new features of the Oracle 12c database, designed for the cloud. Customers who are eager to start building cloud solutions can start with the HP integrated solutions; customers who want to move at a slower pace can get started with Converged Infrastructure building blocks that put them on the path towards integrated solutions at a later date.
Environment description
Drive business innovation and eliminate server sprawl with HP BladeSystem, the industry's only Converged Infrastructure architected for any workload from client to cloud. HP BladeSystem is engineered to maximize every hour, watt and dollar, saving up to 56% total cost of ownership over traditional infrastructures.
With HP BladeSystem it is possible to create a change-ready, power-efficient, network-optimized, simple-to-manage and high-performance infrastructure on which to consolidate, build and scale your Oracle database implementation. Each HP BladeSystem c7000 enclosure can accommodate up to 16 half-height blades, or up to 8 full-height blades, or a mixture of both. In addition, there are 8 interconnect bays with support for any I/O fabric your database applications require.
Choosing a server for Oracle databases involves selecting a server with the right mix of performance, price and power efficiency, along with optimal management. HP's experience in test and in production has been that 2- or 4-socket servers are ideal platforms for Oracle database for the cloud, depending on the workload requirements; 2- and 4-socket systems offer better memory performance and memory scaling for databases that need a large System Global Area (SGA). HP BladeSystem reduces costs and simplifies management through shared infrastructure.
The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service: simply, efficiently and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Use of unique thin technologies reduces acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions. The HP 3PAR StoreServ Gen4 ASIC in each of the system's controller nodes provides a hyper-efficient silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels.
Hardware description
For this white paper we rely on two main components of the HP Converged Infrastructure introduced earlier: two HP ProLiant BL460c Gen8 servers and an HP 3PAR StoreServ 7200 as SAN storage, as shown in figure 1.
Figure 1 The c7000 and the 3PAR StoreServ 7200 used during this cookbook preparation This view is a subset of the fully populated enclosure
Software description
• Red Hat Enterprise Linux Server release 6.4
• Oracle Database 12cR1 Real Application Cluster
Documentation
The table below lists the main documentation used during the creation of this white paper
Document ID Document title
E17888-14    Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
E17720-15    Oracle Database Installation Guide 12c Release 1 (12.1) for Linux
Useful My Oracle Support Notes
Note ID Title
1089399.1    Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
1514012.1    runcluvfy stage -pre crsinst generates "Reference Data is not available for verifying prerequisites on this operating system distribution" on Red Hat 6
1567127.1    RHEL 6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description
Multiple configurations are possible in order to build a RAC cluster. This chapter only provides information about the architecture we worked with during this project. The configuration tested used two HP ProLiant blade servers attached to HP 3PAR StoreServ storage, with a fully redundant SAN and network LAN.
In this section we will also look at the HP ProLiant blade infrastructure
Environment pre-requisites
Based on this architecture the adaptive infrastructure requirements for an Oracle RAC are
• A high-speed communication link (the "private" Virtual Connect network) between all nodes of the cluster. (This link is used for RAC Cache Fusion, which allows RAC nodes to synchronize their memory caches.)
• A common "public" (Virtual Connect) communication link for communication with Oracle clients.
• A storage subsystem accessible by all cluster nodes, for access to the Oracle shared files (Voting, OCR and Database files).
• At least two HP servers. In the current configuration we used a pair of HP ProLiant server blades in a c7000 blade chassis, configured to boot from the HP 3PAR StoreServ storage subsystem.
Oracle Grid Infrastructure installation server checklist
Network switches
• Public network switch, at least 1 GbE, connected to a public gateway.
• Private network switch, at least 1 GbE, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP.
Runlevel: Servers should be either in runlevel 3 or runlevel 5.
Random Access Memory (RAM): At least 4 GB of RAM for an Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.
Temporary disk space allocation: At least 1 GB allocated to /tmp.
Operating system:
• Supported kernels and releases are listed in http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm#CIHFICFD
• In our configuration, Red Hat Enterprise Linux 6. Supported distributions:
– Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
– Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
• Same operating system kernel running on each cluster member node.
• OpenSSH installed manually.
Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS).
Local storage space for Oracle software:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• For Linux x86_64 platforms, allocate 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• Boot from SAN is supported.
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet, to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: Combine energy-reduction technologies, such as the 80 PLUS Platinum, 94-percent-efficient HP 2650W/2400W Platinum Power Supply, with pinpoint measurement and control through Dynamic Power Capping to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: Wire once, then add, replace or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: No single point of failure, to keep your business up and running.
• HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:
• Reduces the number of cables required for an enclosure, compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing and distributed applications
Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them the updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.
Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50% by consolidating all the NIC connections onto two 10 Gb ports.
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity, the number of NICs, HBAs and interconnect modules needed, and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure, or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly, while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second, redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on a component, the detailed information appears in the central frame.
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6 Components' connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM 4 GB to 16 GB:  swap = 1 times the RAM size
RAM > 16 GB:        swap = 16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename            Type        Size     Used  Priority
/dev/dm-3           partition   4194296  0     -1
The free command gives an overview of the current memory consumption; the -g extension provides values in GB:
[root@oracle52 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           189         34        154          0          0         29
-/+ buffers/cache:          5        184
Swap:            3          0          3
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2    39G  4.1G   33G  12% /
In our case /tmp is part of the root file system. Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using, and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
Note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7 and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM look at the installation documentation httph18004www1hpcomproductsserversmanagementunifiedhpsum_infolibraryhtml
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next.
Provide the credentials for root and click Next.
Select the components you need to install and click Install.
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# ./hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (such as Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[root@oracle52 ~]# ls /sys/class/fc_host
host1  host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with "dmesg" to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
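The three steps above can be wrapped in a small loop so that every FC host present on the server is handled, whatever its number. This is only a convenience sketch based on the commands shown above; adjust the sleep time to your environment.
#!/bin/bash
# Rescan all Fibre Channel HBAs: issue a LIP, wait, then rescan the SCSI bus
for host in /sys/class/fc_host/host*; do
    echo 1 > "${host}/issue_lip"
done
sleep 15
for shost in /sys/class/scsi_host/host*; do
    echo "- - -" > "${shost}/scan"
done
dmesg | tail -20    # check the kernel log for newly discovered devices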
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp:
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
kernel.sem (semmsl)            250
kernel.sem (semmns)            32000
kernel.sem (semopm)            100
kernel.sem (semmni)            128
kernel.shmall                  physical RAM size / pagesize (**)
kernel.shmmax                  half of the RAM, or 4 GB (*)
kernel.shmmni                  4096
fs.file-max                    6815744
fs.aio-max-nr                  1048576
net.ipv4.ip_local_port_range   9000 65500
net.core.rmem_default          262144
net.core.rmem_max              4194304
net.core.wmem_default          262144
net.core.wmem_max              1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856      # Half the size of physical memory, in bytes
# Controls the maximum amount of shared memory, in pages
kernel.shmall = 24806374          # Half the size of physical memory, in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters in the current session.
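The two memory-related values depend on the amount of RAM in the server. The helper below (a sketch, not part of the original cookbook) derives the "half of physical memory" figures used above so they do not have to be computed by hand.
#!/bin/bash
# Compute suggested kernel.shmmax (bytes) and kernel.shmall (pages)
# following the "half of physical RAM" rule used in this document.
PAGE_SIZE=$(getconf PAGE_SIZE)
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
SHMMAX=$(( MEM_KB * 1024 / 2 ))               # half of RAM, in bytes
SHMALL=$(( MEM_KB * 1024 / 2 / PAGE_SIZE ))   # half of RAM, in pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"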
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will show the base architecture of a specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
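To avoid checking the packages one by one, the list can be fed to a single rpm query; any package reported as missing still needs to be installed. This is only a convenience sketch; the package list is the one given above.
# Check all required packages in one pass (extend the list as needed)
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc \
           glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio \
           libaio-devel libXext libXtst libX11 libXau libxcb libXi \
           make sysstat unixODBC unixODBC-devel; do
    rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" $pkg || echo "$pkg MISSING"
done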
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386     0.3.106-5     rhel-x86_64-server-5
libaio-devel.x86_64   0.3.106-5     rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs   /dev/shm    tmpfs   rw,exec   0 0
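By default tmpfs can grow to half of the physical RAM. When Automatic Memory Management (MEMORY_TARGET) is used, /dev/shm must be large enough to hold the SGA, so it can be useful to set the size explicitly. The entry below is only an example; the 96g value is an assumption sized for these 192 GB servers and must be adjusted to your memory target.
# Example /etc/fstab entry with an explicit size (assumption: 96g fits the planned SGA)
tmpfs   /dev/shm    tmpfs   rw,exec,size=96g   0 0
# Remount to apply without rebooting, then verify
mount -o remount /dev/shm
df -h /dev/shm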
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one is used for public access to the server and for the Oracle Virtual IP address as well. If you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service (GNS). For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                          Node 1           Node 2
Server public name            oracle52         oracle53
Server public IP address      172.16.0.52      172.16.0.53
Server VIP name               oracle52vip      oracle53vip
Server VIP address            172.16.0.32      172.16.0.33
Server private name 1         oracle52priv0    oracle53priv0
Server private IP address 1   192.168.0.52     192.168.0.53
Server private name 2         oracle52priv1    oracle53priv1
Server private IP address 2   192.168.1.52     192.168.1.53
The current configuration should contain at least eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case eth2 was also initialized, in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• interconnect names for system 1 and system 2
• VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34    oracle34
172.16.0.35    scan2
172.16.0.36    scan3
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
172.16.0.52    oracle52
172.16.0.53    oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
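As an illustration of the addressing plan above, the private interfaces can be given static addresses through their ifcfg files. The file below is only a sketch for node 1's first private interface; the device name, address and any HWADDR value must match your own environment.
# /etc/sysconfig/network-scripts/ifcfg-eth1 (node 1, first private interconnect)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.52
NETMASK=255.255.255.0
NM_CONTROLLED=no
# Restart networking after editing the file:
# service network restart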
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:        [ OK ]
Starting ntpd:             [ OK ]
Check that the NTP server is reachable. The reach column value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you may have to resynchronize your server manually. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of the Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
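After changing the options, make sure the daemon is restarted and enabled at boot, then confirm that it is synchronizing. This is just standard RHEL 6 service handling applied to the configuration above.
[root@oracle52 ~]# chkconfig ntpd on          # start ntpd automatically at boot
[root@oracle52 ~]# service ntpd restart       # pick up the new -x option
[root@oracle52 ~]# ntpq -p                    # the reach column must become > 0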
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. The ntpq -p command will then show the same time but with a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be logged as:
INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.58.10" is common only to the following nodes: oracle52
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.255.10" is common only to the following nodes: oracle53
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: Error Message: PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: Cause: An attempt to query the NTP daemon using the ntpq command failed on all nodes.
INFO: Action: Make sure that the NTP query command ntpq is available on all nodes and make sure that the user running the CVU check has permissions to execute it.
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS source, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. To update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
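Assuming the default configuration file path shown above, the persistent change can also be scripted, which is convenient when preparing both nodes; this is just a shortcut for the manual edit.
# Make the change persistent across reboots (run on both nodes)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config      # verify the resulting setting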
You might also have to disable the iptables firewall in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                    [ OK ]
iptables: Setting chains to policy ACCEPT: filter     [ OK ]
iptables: Unloading modules:                          [ OK ]
For more about the iptables setting, look at the Red Hat documentation (see the Red Hat iptables setting link in the "For more information" section).
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages you to create the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way, passphrase-free access it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate a key:
ssh-keygen -b 1024 -t dsa
Accept the default values, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
(should return the date of node1)
ssh nodename2 date
(should return the date of node2)
ssh private_interconnect_nodename1 date
(should return the date of node1)
ssh private_interconnect_nodename2 date
(should return the date of node2)
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
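A quick loop run from one node, as each software owner, confirms that every combination of user and host is reachable without a password prompt. This is only a convenience check built on the commands above; adjust the host list to your own node names.
# Run as grid, then as oracle, from the first node
for node in oracle52 oracle53 oracle52priv0 oracle53priv0; do
    ssh -o BatchMode=yes $node "hostname; date" || echo "passwordless ssh to $node FAILED"
done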
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
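Once the file is updated, the new limits apply at the next login; they can be verified by opening a fresh session for each owner, for example as below (the values must match the entries above).
[root@oracle52 ~]# su - grid -c "ulimit -u -n -s"     # nproc, nofile, stack
[root@oracle52 ~]# su - oracle -c "ulimit -u -n -s"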
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, so you must install the cvuqdisk RPM manually. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required); see the device mapper reference guide link in the "For more information" section.
Check if the multipath driver is installed
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd     0:off  1:off  2:off  3:on   4:on   5:on   6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, using these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
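After editing /etc/multipath.conf, the daemon has to re-read its configuration before the new 3PAR settings are applied; one of the following is usually enough, and no reboot is required. This is standard device-mapper-multipath handling rather than a 3PAR-specific step.
[root@oracle52 ~]# service multipathd reload        # re-read /etc/multipath.conf
[root@oracle52 ~]# multipath -r                     # force re-evaluation of the maps
[root@oracle52 ~]# multipath -ll                    # verify the 3PARdata,VV settings are in effect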
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
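As an optional check (not part of the original procedure), lsinitrd from the dracut package can be used to confirm that the freshly generated image embeds the new HBA options file; the output should list something like:
[root@oracle52 boot]# lsinitrd /boot/initramfs-$(uname -r).img | grep fc-hba.conf
etc/modprobe.d/fc-hba.conf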
Finally, we may update the boot menu for rollback purposes by adding the backup entry (the third title section shown below):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#         all kernel and initrd paths are relative to /boot/, eg.
#         root (hd0,0)
#         kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#         initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable multipathing for the Oracle shared volumes. The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating a file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a sketch is shown after the udev example below)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Cluster Registry / voting disk
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Database and recovery area disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
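For completeness, option 1 (not used here either) would simply reset ownership and permissions at every boot from /etc/rc.local; a minimal sketch, assuming the alias names defined above, could look like this:
# /etc/rc.local additions (illustrative only; not needed when ASMLib is used)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01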
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm \
oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot or not.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
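As an illustration (not part of the original flow), querydisk can be used to confirm which block device backs a given ASM disk label; assuming the disks created above, its output would look something like:
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/mapper/data01: LABEL="DATA01" TYPE="oracleasm"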
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order giving priority to the multipath devices and excluded the single path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if a redundancy level other than external is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
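Note that elevator=deadline changes the default scheduler for every block device. As a more targeted alternative (our own suggestion, not part of the original procedure), a udev rule can set the Deadline scheduler only for the device-mapper devices at boot, for example in a hypothetical file /etc/udev/rules.d/99-dm-scheduler.rules:
# Illustrative udev rule: apply the deadline elevator to all dm-* devices
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"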
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security,
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf "%{version}" redhat-release-server, and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support "Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a Flex Cluster.
Select "Advanced Installation".
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path":
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl both deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for a shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
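For illustration (not part of the original text), a client could reach the HP12C database created later in this paper through the SCAN with an EZConnect string or a tnsnames.ora entry similar to the following; the SCAN name oracle34 and service name HP12C are the ones used in this environment:
sqlplus system/<password>@oracle34:1521/HP12C
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )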
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34"...
Checking integrity of name service switch configuration file
"/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file
"/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with "asmca".
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
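As a command-line alternative to ASMCA (a sketch under the assumptions of this setup, using the ASMLib disk labels created earlier), the same kind of disk group could also be created from SQL*Plus on the ASM instance, and then mounted on the remaining nodes:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> -- on the other node(s): ALTER DISKGROUP DATA MOUNT;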
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster of at most 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
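As an aside, a silent creation would look roughly like the following; this is only an illustrative sketch (the exact options depend on the response file or parameters chosen) and is not part of the original procedure:
$ dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName HP12C -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword <password> -systemPassword <password>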
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation; click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466     1  0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601     1  0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050     1  0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
On node 1:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
On node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
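If these parameters ever had to be adjusted by hand (they are set automatically during installation in this cookbook), it could be done with ALTER SYSTEM; the following is only an illustrative sketch using the SCAN name of this environment:
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';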
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
# line added by Agent
# listener.ora Network Configuration File:
# /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally, we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
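Two related checks that can be run at this point (optional, not part of the original sequence) cover the SCAN listener configuration and status, also through srvctl:
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener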
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb
quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Environment description
Drive business innovation and eliminate server sprawl with HP BladeSystem, the industry's only Converged Infrastructure architected for any workload from client to cloud. HP BladeSystem is engineered to maximize every hour, watt, and dollar, saving up to 56% total cost of ownership over traditional infrastructures.
With HP BladeSystem it is possible to create a change-ready, power-efficient, network-optimized, simple-to-manage, and high-performance infrastructure on which to consolidate, build, and scale your Oracle database implementation. Each HP BladeSystem c7000 enclosure can accommodate up to 16 half-height blades or up to 8 full-height blades, or a mixture of both. In addition, there are 8 interconnect bays with support for any I/O fabric your database applications require.
Choosing a server for Oracle databases involves selecting a server that offers the right mix of performance, price, and power efficiency with the most optimal management. HP's experience during testing and in production has been that 2- or 4-socket servers are ideal platforms for Oracle database for the cloud, depending on the workload requirements. 2- and 4-socket systems offer better memory performance and memory scaling for databases that need a large System Global Area (SGA). HP BladeSystem reduces costs and simplifies management through shared infrastructure.
The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service simply efficiently and flexibly The arrays feature a tightly coupled clustered architecture secure multi-tenancy and mixed workload support for enterprise-class data centers Use of unique thin technologies reduces acquisition and operational costs by up to 50 while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions The HP 3PAR StoreServ Gen4 ASIC in each of the systemrsquos controller nodes provides a hyper-efficient silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels
Hardware description
For this white paper we rely on 2 main components of the HP Converged Infrastructure introduced earlier: two HP ProLiant BL460c Gen8 servers and an HP 3PAR StoreServ 7200 as SAN storage, as shown in figure 1.
Figure 1 The c7000 and the 3PAR StoreServ 7200 used during this cookbook preparation This view is a subset of the fully populated enclosure
Software description
• Red Hat Enterprise Linux Server release 6.4
• Oracle Database 12cR1 Real Application Clusters
Documentation
The table below lists the main documentation used during the creation of this white paper
Document ID Document title
E17888-14 Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
E17720-15 Oracle Database Installation Guide 12c Release 1 (12.1) for Linux
Useful My Oracle Support Notes
Note ID Title
1089399.1 Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
1514012.1 runcluvfy stage -pre crsinst generates "Reference Data is not available for verifying prerequisites on this operating system distribution" on Red Hat 6
1567127.1 RHEL 6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description
Multiple configurations are possible in order to build a RAC cluster. This chapter only delivers some information about the architecture we worked with during this project. For the configuration tested, we used two HP ProLiant blade servers attached to HP 3PAR StoreServ storage through a fully redundant SAN, together with a fully redundant network LAN.
In this section we will also look at the HP ProLiant blade infrastructure.
Environment pre-requisites
Based on this architecture, the adaptive infrastructure requirements for an Oracle RAC are:
• A high-speed communication link (the "private" Virtual Connect network) between all nodes of the cluster. (This link is used for RAC Cache Fusion, which allows RAC nodes to synchronize their memory caches.)
• A common "public" (Virtual Connect) communication link for communication with Oracle clients.
• The storage subsystem must be accessible by all cluster nodes for access to the Oracle shared files (Voting, OCR, and Database files).
• At least two HP servers are required. In the current configuration we used a couple of HP ProLiant server blades in a c7000 blade chassis, configured to boot from the HP 3PAR StoreServ storage subsystem.
Oracle Grid Infrastructure installation server checklist
Network switches:
• Public network switch, at least 1 GbE, connected to a public gateway
• Private network switch, at least 1 GbE, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP
Runlevel: Servers should be either in runlevel 3 or runlevel 5.
Random Access Memory (RAM): At least 4 GB of RAM for an Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.
Temporary disk space allocation: At least 1 GB allocated to /tmp.
Operating system:
• Supported in the list of supported kernels and releases listed in http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm#CIHFICFD
• In our configuration, Red Hat Enterprise Linux 6. Supported distributions:
– Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
– Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
• Same operating system kernel running on each cluster member node
• OpenSSH installed manually
Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS)
• Local storage space for the Oracle software
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• For Linux x86_64 platforms, allocate 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• Boot from SAN is supported.
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: Combine energy-reduction technologies, such as the 80 PLUS Platinum 94 percent-efficient HP 2650W/2400W Platinum Power Supply, with pinpoint measurement and control through Dynamic Power Capping to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: Wire once, then add, replace, or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: No single point of failure, to keep your business up and running.
• HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:
• Reduces the number of cables required for an enclosure compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications
Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.
Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50% by consolidating all the NIC connections onto two 10 Gb ports.
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity, the number of NICs, HBAs, and interconnect modules needed, and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
bull Wizards for simple fast setup and configuration
bull Highly available and secure access to the HP BladeSystem infrastructure
bull Security roles for server network and storage administrators
bull Agent-less device health and status
bull Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second, redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on one component, the detailed information will appear in the central frame.
Figure 5 From the HP Onboard Administrator, very detailed information related to the server is available
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the component connectivity.
Figure 6 Component connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system The minimum required is 4GB in an Oracle RAC cluster
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal: 198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM Swap
4 to 16 GB 1 times the RAM size
> 16 GB 16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename    Type       Size     Used  Priority
/dev/dm-3   partition  4194296  0     -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB:
[root@oracle52 ~]# free -g
             total  used  free  shared  buffers  cached
Mem:           189    34   154       0        0      29
-/+ buffers/cache:      5   184
Swap:            3     0     3
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case /tmp is part of /. Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
By the way, note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from hp.com: http://h20566.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdHome/?sp4ts.oid=5177950&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3DswEnvOID%253D4103%257CswLang%253D%257Caction%253DlistDriver&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (like Fibre Channel, SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with "dmesg" to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance <SCSI host number>
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
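For example, once sg3_utils is installed, all adapters can be rescanned with the command below (an illustrative call, not part of the original procedure; the -a option asks the script to scan all targets):
[root@oracle52 ~]# rescan-scsi-bus.sh -a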
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (**)
kernel.shmmax Half of the RAM, or 4 GB (*)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
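As an illustration, both values can be derived directly from /proc/meminfo and the page size; this small helper (not part of the original procedure) prints the figures used above:
MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
PAGE=$(getconf PAGE_SIZE)
echo "kernel.shmmax = $((MEM_KB * 1024 / 2))"         # half of the RAM, in bytes
echo "kernel.shmall = $((MEM_KB * 1024 / 2 / PAGE))"  # half of the RAM, in pages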
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856   # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374       # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters in the current session.
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release shown is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit release and the 64-bit release. The following command output will specify the base architecture of the specified package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
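To avoid checking each package individually, a simple loop such as the one below can be used; this is only a convenience sketch, not part of the original procedure:
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh \
           libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst libX11 \
           libXau libxcb libXi make sysstat unixODBC unixODBC-devel
do
  rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" $pkg || echo "$pkg is MISSING"
done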
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386 0.3.106-5 rhel-x86_64-server-5
libaio-devel.x86_64 0.3.106-5 rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing:
libaio-devel i386 0.3.106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm | 12 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libaio-devel 1/1
Installed:
libaio-devel.i386 0:0.3.106-5
Complete!
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
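If the options had to be changed, the file system can also be remounted immediately, without a reboot; for example:
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep tmpfs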
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file (a resolution check example is shown after the tables below). Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information.
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
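As a quick sanity check before the installation, the SCAN name declared in the corporate DNS should resolve to the three addresses above. The SCAN name okc12c-scan used here is only an assumed example, as the paper does not list the actual name:
[root@oracle52 ~]# nslookup okc12c-scan
The three addresses 172.16.0.34, 172.16.0.35, and 172.16.0.36 should be returned, rotating in round-robin order across successive queries.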
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• interconnect names for system 1 and system 2
• VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52    # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
Check if the NTP server is reachable. The reach value needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2.austin.hp GPS 1 u 5 64 1 133.520 15.473 0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
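Then restart the daemon so that the new option is taken into account:
[root@oracle52 network-scripts]# service ntpd restart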
Known issue
Sometimes the NTP server defined in the ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will provide the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp.hp.net 172.16.255.10 3 u 6 64 1 128.719 5.275 0.000
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp.hp.net 172.16.58.10 3 u 3 64 1 108.900 12.492 0.000
The error will be log as
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below:
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2.austin.hp GPS 1 u 5 64 1 133.520 15.473 0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify explicitly the uid and gid.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
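Once the grid and oracle users log in again, the new limits can be quickly verified; for example (an illustrative check of the hard limits set above):
[root@oracle52 ~]# su - grid -c "ulimit -Hn"    # expected 65536 (nofile)
[root@oracle52 ~]# su - grid -c "ulimit -Hu"    # expected 16384 (nproc)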
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...    ########################################### [100%]
   1:cvuqdisk   ########################################### [100%]
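As a quick verification that the package is now registered in the RPM database on both nodes (an illustrative check, assuming it has also been installed on oracle53):
[root@oracle52 rpm]# rpm -q cvuqdisk
[root@oracle52 rpm]# ssh oracle53 rpm -q cvuqdisk
Both commands should return the cvuqdisk-1.0.9-1 package.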
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.14-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings, based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part shown in red in the original paper (the last title entry below):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
# initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
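If this udev approach were used, the new rules would also need to be activated. A minimal sketch, assuming the rule file above (commands shown for illustration only):
# Reload the udev rules and re-trigger the events so the new ownership/permissions are applied
udevadm control --reload-rules
udevadm trigger --type=devices --action=change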
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and by allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster:
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
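For example, assuming the same init script, the autostart setting and the driver state can be checked or changed as follows (illustrative):
[root@oracle52 ~]# /etc/init.d/oracleasm enable    # load the driver now and at every boot
[root@oracle52 ~]# /etc/init.d/oracleasm status    # confirm the driver is loaded and /dev/oracleasm is mounted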
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
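For instance, querydisk confirms that a label is a valid ASM disk (a minimal illustration; the exact output wording varies between oracleasm-support versions):
[root@oracle52 ~]# oracleasm querydisk DATA01
Disk "DATA01" is a valid ASM disk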
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=/dev/mapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
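After changing these parameters, the ASMLib driver can be restarted and the disks rescanned so that the new scan order takes effect (illustrative, using the same init script as above):
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks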
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that ASM is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and Voting disks: need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (ASM volume)                        2 GB
/dev/oracleasm/disks/DATA01     Database (ASM volume)                   20 GB
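A quick way to confirm that enough space is actually available for these locations (paths as listed above) is, for example:
[root@oracle52 ~]# df -h /u01 /tmp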
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
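Note that the elevator=deadline boot parameter changes the default scheduler for every block device. A more targeted alternative, sketched below under the assumption that the rule file name is free to choose, is a udev rule that applies the Deadline scheduler only to the multipath devices used by ASM:
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical example)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data01|fra01|voting", ATTR{queue/scheduler}="deadline"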
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security concerns. It is recommended to turn sudo off right after the binary installation is complete.
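For instance, once the binaries are installed, the two entries added above can simply be commented out or deleted again with visudo (illustrative):
[root@oracle52 ~]# visudo
# grid    ALL=(ALL)   NOPASSWD: ALL    <- comment out or remove after the installation
# oracle  ALL=(ALL)   NOPASSWD: ALL    <- comment out or remove after the installation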
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy RPM which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server RPM does not exist.
Download the RPM from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal and a display.
In a basic single installation environment, there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example, the goal is to install a standard cluster, not a Flex Cluster.
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[grid@oracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in brief, the status of the CRS processes across the whole cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1   ONLINE  ONLINE   oracle52   Started,STABLE
ora.cluster_interconnect.haip
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crf
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssdmonitor
      1   ONLINE  ONLINE   oracle52   STABLE
ora.ctssd
      1   ONLINE  ONLINE   oracle52   OBSERVER,STABLE
ora.diskmon
      1   OFFLINE OFFLINE             STABLE
ora.drivers.acfs
      1   ONLINE  ONLINE   oracle52   STABLE
ora.evmd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gipcd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gpnpd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.mdnsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.storage
      1   ONLINE  ONLINE   oracle52   STABLE
The command below can be used with the "-t" option for a shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    oracle52
ora.FRA.dg     ora....up.type ONLINE    ONLINE    oracle52
ora....ER.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora....N1.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    oracle52
ora.asm        ora.asm.type   ONLINE    ONLINE    oracle52
ora.cvu        ora.cvu.type   ONLINE    ONLINE    oracle52
ora.mgmtdb     ora....db.type ONLINE    ONLINE    oracle52
ora....network ora....rk.type ONLINE    ONLINE    oracle52
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    oracle52
ora.ons        ora.ons.type   ONLINE    ONLINE    oracle52
ora....SM1.asm application    ONLINE    ONLINE    oracle52
ora....52.lsnr application    ONLINE    ONLINE    oracle52
ora....e52.ons application    ONLINE    ONLINE    oracle52
ora....e52.vip ora....t1.type ONLINE    ONLINE    oracle52
ora....SM2.asm application    ONLINE    ONLINE    oracle53
ora....53.lsnr application    ONLINE    ONLINE    oracle53
ora....e53.ons application    ONLINE    ONLINE    oracle53
ora....e53.vip ora....t1.type ONLINE    ONLINE    oracle53
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
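Before running the verification, the DNS resolution of the SCAN name can be checked quickly; for example (oracle34 being the SCAN name used in this setup):
[grid@oracle52 ~]$ nslookup oracle34
When several IP addresses are registered for the SCAN, repeating the command should return them in round-robin order.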
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
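As an alternative to the ASMCA GUI, a disk group can also be created directly from the ASM instance with SQL*Plus. A minimal sketch, with a hypothetical disk group name and ASMLib disk label (not the ones used in this setup):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP EXAMPLE EXTERNAL REDUNDANCY DISK 'ORCL:EXAMPLE01';
SQL> ALTER DISKGROUP EXAMPLE MOUNT;  -- CREATE only mounts it locally; run the MOUNT on the other node(s)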
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oracle:oinstall) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?   00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2:
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))   # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally, we can check the srvctl configuration for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
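For reference, a client could then reach the database through the SCAN with a tnsnames.ora entry along these lines (a sketch only; the alias is arbitrary and the service name assumes the HP12C database created above):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )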
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                    File Name        Disk group
--  -----    -----------------                    ---------        ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637     (ORCL:VOTING01)  [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
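The current autostart setting can be displayed at any time with the config option (output shown as an illustration):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.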
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
5
Documentation
The table below lists the main documentation used during the creation of this white paper
Document ID Document title
E17888-14 Oracle Grid Infrastructure Installation Guide 12c Release 1 (121) for Linux
E17720-15 Oracle Database Installation Guide 12c Release 1 (121) for Linux
Useful My Oracle Support Notes
Note ID Title
10893991 Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
15140121 runcluvfy stage -pre crsinst generates Reference Data is not available for verifying prerequisites on this operating system distribution on Red Hat 6
15671271 RHEL 6 12c CVU Fails Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description
Multiple configurations are possible in order to build a RAC cluster This chapter will only deliver some information about the architecture we worked with during this project For the configuration tested two HP ProLiant blade servers attached to HP 3PAR StoreServ storage and a fully redundant SAN and network LAN list of components were used
In this section we will also look at the HP ProLiant blade infrastructure
Environment pre-requisites
Based on this architecture the adaptive infrastructure requirements for an Oracle RAC are
bull High speed communication link (ldquoprivaterdquo Virtual Connect network) between all nodes of the cluster (This link is used for RAC Cache Fusion that allows RAC nodes to synchronize their memory caches)
bull A common ldquopublicrdquo (Virtual Connect) communication link for communication with Oracle clients
bull The storage subsystem must be accessible by all cluster nodes for access to the Oracle shared files (Voting OCR and Database files)
bull At least two HP servers are required In the current configuration we used a couple of HP ProLiant server blades in a c7000 blade chassis configured to boot from the HP 3PAR StoreServ storage subsystem
Oracle Grid Infrastructure installation server checklist
Network switches
bull Public network switch at least 1 GbE connected to a public gateway
bull Private network switch at least 1 GbE dedicated for use only with other cluster member nodes The interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCPIP
Runlevel Servers should be either in runlevel 3 or runlevel 5
Random Access Memory (RAM) At least 4 GB of RAM for Oracle Grid Infrastructure for a Cluster installation including installations where you plan to install Oracle RAC
Temporary disk space allocation At least 1 GB allocated to tmp
Operating system
bull Supported in the list of supported kernels and releases listed in httpdocsoraclecomcdE16655_01install121e17888prelinuxhtmCIHFICFD
bull In our configuration Red Hat Enterprise Linux 6 Supported distributions
ndash Red Hat Enterprise Linux 6 2632-71el6x86_64 or later
ndash Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel 2632-100285el6x86_64 or later
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
6
bull Same operating system kernel running on each cluster member node
bull OpenSSH installed manually
Storage hardware either Storage Area Network (SAN) or Network-Attached Storage (NAS)
bull Local Storage Space for Oracle Software
bull At least 35 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) The Oracle base includes Oracle Clusterware and Oracle ASM log files
bull For Linux x86_64 platforms allocate 58 GB of disk space for the Oracle home (the location for the Oracle Database software binaries)
bull Boot from SAN is supported
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to youmdashregardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet, to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: combine energy-reduction technologies, such as the 80 PLUS Platinum 94 percent-efficient HP 2650W/2400W Platinum Power Supply, with pinpoint measurement and control through Dynamic Power Capping, to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: wire once, then add, replace, or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: this essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: maintain an enclosure's power consumption at or below a cap value, to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: no single point of failure, to keep your business up and running.
• HP Onboard Administrator: wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:
• Reduces the number of cables required for an enclosure, compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for east/west traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications.
Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them the updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.
Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50 percent by consolidating all the NIC connections onto two 10 Gb ports.
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity, the number of NICs, HBAs, and interconnect modules needed, and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95 percent for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly, while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise On the right side the front and rear view of the enclosure component is available By clicking on one component the detailed information will appear in the central frame
Figure 5 From the HP Onboard Administrator very detailed information related to the server information is available
More about the HP Onboard Administrator: hp.com/go/oa.
Connectivity
The diagram in figure 6 below shows a basic representation of the components connectivity
Figure 6 Component's connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.
[rootoracle52 ~] grep MemTotal /proc/meminfo
MemTotal: 198450988 kB
[rootoracle52 ~] grep SwapTotal /proc/meminfo
SwapTotal: 4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM Swap
4 to 16 GB 1 times the RAM size
> 16 GB 16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB)
[rootoracle52 ~] swapon -s
Filename Type Size Used
Priority
devdm-3 partition 4194296 0 -1
The free command gives an overview of the current memory consumption The -g extension provides values in GB
[rootoracle52 ~] free -g
total used free shared buffers cached
Mem 189 34 154 0 0 29
-+ bufferscache 5 184
Swap 3 0 3
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[rootoracle52 ~] df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mpathap2 39G 4.1G 33G 12% /
In our case /tmp is part of /. Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
By the way note that Oracle 12c is not available for Linux 32-bit architecture
Then check the distribution and version you are using
[rootoracle53 ~] more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from hp.com: httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[rootoracle52 kits] mkdir cdrom
[rootoracle52 kits] mount -o loop=/dev/loop0
HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-
SPP2013020B2013_06282iso cdrom
[rootoracle52 kits] cd cdrom/hp/swpackages
[rootoracle52 kits] hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for around 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[rootoracle52 kits] mkdir supspprhel6
[rootoracle52 kits] mv supspprhel64entargz supspprhel6
[rootoracle52 kits] cd supspprhel6
[rootoracle52 kits] tar xvf supspprhel64entargz
[rootoracle52 kits] hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (such as Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[rootoracle52 ~] ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[rootoracle52 ~] echo 1 > /sys/class/fc_host/host1/issue_lip
[rootoracle52 ~] echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[rootoracle52 ~] echo "- - -" > /sys/class/scsi_host/host1/scan
[rootoracle52 ~] echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with "dmesg" to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (*)
kernel.shmmax Half of the RAM or 4 GB (**)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(**) max is 4294967296
(*) 8239044 in our case
[rootoracle52 tmp] getconf PAGE_SIZE
4096
[rootoracle52 tmp] grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[rootoracle52 hp_fibreutils] vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
# (half the size of physical memory in bytes)
kernel.shmmax = 101606905856
# Controls the maximum number of shared memory segments, in pages
# (half the size of physical memory in pages)
kernel.shmall = 24806374
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters in the current session.
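To confirm that the running kernel now reports the expected values on each node, the parameters can be read back in one pass; a minimal sketch using the parameter names set above:
sysctl kernel.sem kernel.shmall kernel.shmmax kernel.shmmni
sysctl fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max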
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1    # check the exact release
or
rpm -qa | grep make    # syntax comparison in the rpm database
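To check the whole list in one pass rather than package by package, a small loop such as the one below can be used; this is a sketch, and the package names should be adjusted to the exact list above:
# Report any required package that is not installed
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel \
           ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst \
           libX11 libXau libxcb libXi make sysstat unixODBC unixODBC-devel; do
  rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" $pkg || echo "$pkg MISSING"
done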
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of each matching package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally installation of the packages should be done using yum This is the easiest way as long as a repository server is available
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[rootoracle52 swpackages] more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[rootoracle52 ~] mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
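If the options had to be changed, the file system can also be remounted immediately, without waiting for a reboot; a minimal sketch:
mount -o remount,rw,exec /dev/shm
mount | grep /dev/shm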
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic; the second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It does not cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[rootoracle52 ~] ip addr
1 lo ltLOOPBACKUPLOWER_UPgt mtu 16436 qdisc noqueue state UNKNOWN
linkloopback 000000000000 brd 000000000000
inet 1270018 scope host lo
inet6 1128 scope host
valid_lft forever preferred_lft forever
2 eth0 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3c brd ffffffffffff
inet 1721605321 brd 172160255 scope global eth0
inet6 fe80217a4fffe77ec3c64 scope link
valid_lft forever preferred_lft forever
3 eth1 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3e brd ffffffffffff
inet 19216805324 brd 1921680255 scope global eth1
inet6 fe80217a4fffe77ec3e64 scope link
valid_lft forever preferred_lft forever
4 eth2 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec40 brd ffffffffffff
inet 19216815316 brd 192168255255 scope global eth2
inet6 fe80217a4fffe77ec4064 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[rootoracle52 network-scripts] more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
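Before launching the installer, it is worth confirming that every name defined above resolves identically on both nodes. A minimal sketch, using the host names from our /etc/hosts:
# Run on each node; every name should return an address
for h in oracle52 oracle53 oracle52vip oracle53vip \
         oracle52priv0 oracle53priv0 oracle52priv1 oracle53priv1; do
  getent hosts $h || echo "$h does not resolve"
done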
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 165528920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[rootoracle52 network-scripts] vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52    # ntp server address
Then restart the NTP service
[rootoracle52 network-scripts] sbinservice ntpd restart
Shutting down ntpd [ OK ]
Starting ntpd [ OK ]
Check if the NTP server is reachable. The reach value needs to be higher than 0.
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[rootoracle52 ~] service ntpd stop
[rootoracle52 ~] ntpdate ntphpnet
[rootoracle52 ~] service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[rootoracle52 network-scripts] vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
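After editing /etc/sysconfig/ntpd, the daemon has to be restarted for the -x flag to take effect, and the service should be enabled at boot; a short sketch:
service ntpd restart
chkconfig ntpd on
chkconfig --list ntpd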
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[rootoracle53 kits] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 1721625510 3 u 6 64 1 128719 5275 0000
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 172165810 3 u 3 64 1 108900 12492 0000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below.
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
Check the SELinux setting
In some circumstances, the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config.
[rootoracle53 ] more /etc/selinux/config
This file controls the state of SELinux on the system
SELINUX= can take one of these three values
enforcing - SELinux security policy is enforced
permissive - SELinux prints warnings instead of enforcing
disabled - SELinux is fully disabled
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[rootoracle52 oraInventory] getenforce
Enforcing
[rootoracle52 oraInventory] setenforce 0
[rootoracle52 oraInventory] getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC
[rootoracle52 vnc] service iptables stop
iptables Flushing firewall rules [ OK ]
iptables Setting chains to policy ACCEPT filter [ OK ]
iptables: Unloading modules [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
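The service command above only stops the firewall for the current session. If the firewall is to stay disabled across reboots (to be weighed against your security policy), it can also be removed from the boot sequence; a minimal sketch:
chkconfig iptables off
chkconfig --list iptables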
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[rootoracle52 ~] grep -E "504|505|506|507|508|509" /etc/group
[rootoracle52 ~]
[rootoracle52 ~] grep -E "502|501" /etc/passwd
[rootoracle52 ~]
Then let's create the users and groups:
[rootoracle52 ~] /usr/sbin/groupadd -g 504 asmadmin
[rootoracle52 ~] /usr/sbin/groupadd -g 505 asmdba
[rootoracle52 ~] /usr/sbin/groupadd -g 506 asmoper
[rootoracle52 ~] /usr/sbin/groupadd -g 507 dba
[rootoracle52 ~] /usr/sbin/groupadd -g 508 oper
[rootoracle52 ~] /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages you to create the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups.
[rootoracle52 ~] id oracle
uid=501(oracle) gid=509(oinstall)
groups=509(oinstall),505(asmdba),507(dba),508(oper)
[rootoracle52 ~] id grid
uid=502(grid) gid=509(oinstall)
groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[rootoracle52 sshsetup] passwd oracle
[rootoracle52 sshsetup] passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle users. During the script execution, the user password needs to be provided four times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[rootoracle52 sshsetup] su - grid
[gridoracle52 sshsetup] ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[gridoracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 140513 CEST 2013
[gridoracle52 sshsetup]$ exit
logout
[rootoracle52 sshsetup] su - oracle
[oracleoracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracleoracle52 ~]$ ssh oracle53 date
Wed Jul 24 140216 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below.
[gridoracle53 ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[gridoracle52 ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
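A quick way to validate the whole connectivity matrix is to loop over every host name from each node, for both the grid and the oracle users; any password prompt indicates a setup problem. A minimal sketch using our node names:
# Run as grid and again as oracle, on both nodes
for h in oracle52 oracle53 oracle52priv0 oracle53priv0; do
  ssh -o BatchMode=yes $h date || echo "passwordless ssh to $h failed"
done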
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
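The new limits only apply to new login sessions. They can be verified for each user with ulimit; a minimal sketch:
# Check the open files, max processes and stack size limits seen after login
su - grid -c 'ulimit -n; ulimit -u; ulimit -s'
su - oracle -c 'ulimit -n; ulimit -u; ulimit -s'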
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[rootoracle52 rpm] scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[rootoracle52 rpm] CVUQDISK_GRP=oinstall export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[rootoracle52 rpm] rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... [100%]
1:cvuqdisk [100%]
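The same package release must be present on every node of the cluster; a quick check from the first node (host name as used above):
rpm -q cvuqdisk
ssh oracle53 rpm -q cvuqdisk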
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available in the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system use the lspci command
[rootoracle52 yumreposd] lspci|grep Fibre
05000 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05001 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running
[rootoracle52 ~] chkconfig --list |grep multi
multipathd 0off 1off 2off 3on 4on 5on 6off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable The blacklist section defines which devices should be excluded from the multipath topology discovery The blacklist_exceptions section defines which devices should be included in the multipath topology discovery
despite being listed in the blacklist section The multipaths section defines the multipath topologies They are indexed by a World Wide Identifier (WWID) The devices section defines the device-specific settings based on vendor
and product values
Check the current, freshly installed configuration:
[rootoracle52 yumreposd] multipathd -k
multipathd> show config
...
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now, our multipath.conf file looks like this:
[rootoracle52 yumreposd] more /etc/multipath.conf
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        path_checker tur
        hardware_handler "0"
        failback immediate
        rr_weight uniform
        rr_min_io_rq 100
        no_path_retry 18
    }
}
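For the new device section to be taken into account, the multipath daemon has to re-read its configuration. Something along these lines should do it (a sketch; the grep simply confirms the 3PAR LUNs are still mapped):
service multipathd reload
multipath -ll | grep 3PARdata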
Update the QLogic FC HBA configuration
[rootoracle52 yumreposd] more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs
[rootoracle52 yumreposd] cd /boot
[rootoracle52 boot] mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[rootoracle52 boot] dracut
Finally, we may update the boot menu for rollback purposes. Add the last title entry shown below (highlighted in red in the original document).
[rootoracle52 boot] cd /boot/grub
[rootoracle52 grub] vi menu.lst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
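Once the server has been rebooted, the values actually used by the driver can be read back from sysfs; a minimal sketch, using the parameter names set in fc-hba.conf:
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
cat /sys/module/qla2xxx/parameters/ql2xloginretrycount
cat /sys/module/qla2xxx/parameters/qlport_down_retry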
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command.
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
multipath
wwid 360002ac0000000000000002100006e40
alias voting
multipath
wwid 360002ac0000000000000002200006e40
alias data01
multipath
wwid 360002ac0000000000000002300006e40
alias fra01
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names).
[rootoracle52 ~] multipath -F
[rootoracle52 ~] multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block device to the raw device as raw is not supported anymore
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[rootdbkon01 rulesd] pwd
/etc/udev/rules.d
[rootdbkon01 rulesd] more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib This white paper is available here
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[rootoracle52 ASMLib] yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver file system needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[rootoracle52 ASMLib] /usr/sbin/oracleasm init
[rootoracle52 ASMLib] /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script activates or deactivates the automatic startup of the package.
The system administrator has one last task. Every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[rootoracle52 ASMLib] oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[rootoracle52 ASMLib] ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[rootoracle52 ~] vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and Voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM volume)        20 GB
/dev/oracleasm/disks/VOTING     OCR (ASM volume)                        2 GB
/dev/oracleasm/disks/DATA01     Database (ASM volume)                   20 GB
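Before creating the directories, it can be worth confirming that the file system hosting them really has the free space listed above (a simple sketch; /u01 is the mount point used in this setup):
[root@oracle52 ~]# df -h /u01 /tmp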
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next check that the IO scheduler status has been updated
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
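A quick way to confirm the setting on all of the ASM device-mapper devices at once (a minimal sketch based on the dm-4, dm-5 and dm-6 devices identified above) is a small shell loop; each line should show [deadline] as the active scheduler:
[root@oracle52 sys]# for d in dm-4 dm-5 dm-6; do echo -n "$d: "; cat /sys/block/$d/queue/scheduler; done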
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root or you can delegate to the installer the privilege to run configuration steps as root using one of the following options
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security concerns. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf '%{version}' redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
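Before launching the verification, a simple sanity check is to confirm that the dummy package is now visible to rpm (the exact package list returned will differ per system):
[root@oracle52 ~]# rpm -qa | grep redhat-release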
Then run the cluster verification utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal with a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and the SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
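As a quick check before this step (a minimal sketch; oracle34 is the SCAN name used in this setup and it resolves to a single address, 172.16.0.34), confirm from any node that the DNS answers for the SCAN name. With three SCAN addresses configured, repeated lookups would return them in round-robin order:
[grid@oracle52 ~]$ nslookup oracle34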
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capability using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored: it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150 : Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52      Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52      STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52      STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52      STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52      OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                    STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52      STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52      STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    oracle52
ora.FRA.dg     ora....up.type ONLINE    ONLINE    oracle52
ora....ER.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora....N1.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    oracle52
ora.asm        ora.asm.type   ONLINE    ONLINE    oracle52
ora.cvu        ora.cvu.type   ONLINE    ONLINE    oracle52
ora.mgmtdb     ora....db.type ONLINE    ONLINE    oracle52
ora....network ora....rk.type ONLINE    ONLINE    oracle52
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    oracle52
ora.ons        ora.ons.type   ONLINE    ONLINE    oracle52
ora....SM1.asm application    ONLINE    ONLINE    oracle52
ora....52.lsnr application    ONLINE    ONLINE    oracle52
ora....e52.ons application    ONLINE    ONLINE    oracle52
ora....e52.vip ora....t1.type ONLINE    ONLINE    oracle52
ora....SM2.asm application    ONLINE    ONLINE    oracle53
ora....53.lsnr application    ONLINE    ONLINE    oracle53
ora....e53.ons application    ONLINE    ONLINE    oracle53
ora....e53.vip ora....t1.type ONLINE    ONLINE    oracle53
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance: ASM disk groups can be managed easily by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
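As an alternative to the ASMCA GUI, a disk group can also be created from the command line via SQL*Plus on the ASM instance. The sketch below is illustrative only: DATA02 is a hypothetical ASMLib label that would first have to be created with oracleasm createdisk, and the new group then has to be mounted on the remaining node(s).
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
SQL> -- then, connected to the ASM instance on oracle53:
SQL> ALTER DISKGROUP DATA2 MOUNT;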
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall group) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries; the database will be created later with DBCA.
Select RAC installation.
The member nodes of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is only eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new 12c DBCA option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26   00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
  Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
  Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
  Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
  Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
  Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
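A few related srvctl checks can also be run at this point (shown as a sketch; the output will reflect your own cluster configuration):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl config scan_listener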
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[gridoracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
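Oracle Clusterware also backs up the OCR automatically every four hours and keeps a rotating set of copies. A quick way to review these backups, and to force one manually if needed, is sketched below:
[root@oracle52 ~]# ocrconfig -showbackup
[root@oracle52 ~]# ocrconfig -manualbackup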
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
• Same operating system kernel running on each cluster member node
• OpenSSH installed manually
Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS)
• Local storage space for the Oracle software
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• For Linux x86_64 platforms, allocate 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• Boot from SAN is supported
HP BladeSystem
HP combined its comprehensive technology to make BladeSystem not only easy to use but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
• Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
• HP Thermal Logic technologies: Combine energy-reduction technologies such as the 80 PLUS Platinum, 94 percent-efficient HP 2650W/2400W Platinum Power Supply with pinpoint measurement and control through Dynamic Power Capping to save energy and reclaim trapped capacity without sacrificing performance.
• HP Virtual Connect architecture: Wire once, then add, replace, or recover blades on the fly without impacting networks and storage or creating extra steps.
• HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
• HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
• HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
• HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
• HP NonStop midplane: No single point of failure, to keep your business up and running.
• HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.
HP administration tools were used to configure the HP environment as shown in figure 2
Figure 2 A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator
Further details on the HP BladeSystem can be found at hpcomgoBladeSystem
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure It adds a hardware abstraction layer that removes the direct coupling between them Server administrators can physically wire the uplinks from the enclosure to its network connections once and then manage the network addresses and uplink paths through Virtual Connect software Using Virtual Connect interconnect modules provides the following capabilities
• Reduces the number of cables required for an enclosure, compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications.
Without Virtual Connect abstraction changes to server hardware (for example replacing the system board during a service event) often result in changes to the MAC addresses and WWNs The server administrator must then contact the LANSAN administrators give them updated addresses and wait for them to make the appropriate updates to their infrastructure With Virtual Connect a server profile holds the MAC addresses and WWNs constant so the server administrator can apply the same networking profile to new hardware This can significantly reduce the time for a service event
Virtual Connect Flex-10 technology further simplifies network interconnects Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs) This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter Prior to Flex-10 a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware up to 50 by consolidating all the NIC connections onto two 10 Gb ports
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet Fibre Channel or accelerated iSCSI traffic into a single 10 Gb data stream (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function and (3) preserve routing information for all data types Flex-10 technology and FlexFabric adapters reduce management complexity the number of NICs HBAs and interconnect modules needed and associated power and operational costs Using FlexFabric technology lets you reduce the hardware requirements by 95 for a full enclosure of 16 virtualized serversmdashfrom 40 components to two FlexFabric modules
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly while reducing the time it takes to provision storage connectivity Figure 3 and 4 show an example of the interface to the Virtual Connect environment
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hpcomgoVirtualConnect
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator modulefirmware If desired a customer may order a second redundant Onboard Administrator module for each enclosure When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure they work in an active - standby mode assuring full redundancy with integrated management
Figure 5 below shows the information related to the enclosure we used in this exercise On the right side the front and rear view of the enclosure component is available By clicking on one component the detailed information will appear in the central frame
Figure 5. Detailed server information is available from the HP Onboard Administrator
More about the HP Onboard Administrator hpcomgooa
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6. Component connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM             Swap
4 to 16 GB      1 times the RAM size
> 16 GB         16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename                        Type            Size    Used    Priority
/dev/dm-3                       partition       4194296 0       -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB.
[root@oracle52 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           189         34        154          0          0         29
-/+ buffers/cache:          5        184
Swap:            3          0          3
Check the temporary space available
Oracle recommends having at least 1 GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case, /tmp is part of the root file system (/). Even if this is not an optimal setting, we are far above the 1 GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
By the way note that Oracle 12c is not available for Linux 32-bit architecture
Then check the distribution and version you are using
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from the SPP software page on hp.com (HP Support Center).
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable:
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 kits]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again:
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# ./hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
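For reference, subscribing a server to the SDR with yum only requires a small repository file. The sketch below is illustrative only; the exact baseurl path and GPG key must be taken from the SDR site itself:
# /etc/yum.repos.d/hp-spp.repo -- illustrative sketch, verify paths on the SDR site
[hp-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hp.com/SDR/repo/<repository path for your distribution>
enabled=1
# set gpgcheck=1 after importing the HP signing keys published on the SDR
gpgcheck=0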
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot In order to discover new SCSI devices (like Fibre Channel SAS) you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone
Find what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1  host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Once the SPP is installed, an alternative is to use the hp_rescan utility. Look for it in /opt/hp:
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
 hp_rescan
DESCRIPTION
 Sends the rescan signal to all or selected Fibre Channel HBAs/CNAs
OPTIONS
 -a, --all       - Rescan all Fibre Channel HBAs
 -h, --help      - Prints this help message
 -i, --instance  - Rescan a particular instance <SCSI host number>
 -l, --list      - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (*)
kernel.shmmax                    Half of the RAM or 4GB (**)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
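A quick way to confirm the live values after loading them (a minimal check, not part of the original procedure):
# Verify the running kernel parameters in one shot
sysctl kernel.sem kernel.shmall kernel.shmmax kernel.shmmni \
       fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range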
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        (check the exact release)
or
rpm -qa | grep make       (syntax comparison in the rpm database)
Due to the specific 64-bit architecture of the x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output specifies the base architecture of the given package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
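Rather than querying each rpm one by one, a small loop can report anything that is missing. This is a minimal sketch based only on the package names listed above:
# Report required packages that are not installed, with their architecture
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc \
           glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel \
           libXext libXtst libX11 libXau libxcb libXi make sysstat \
           unixODBC unixODBC-devel; do
  rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" "$pkg" \
    || echo "MISSING: $pkg"
done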
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available:
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
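If the options had to be changed, they can be applied without a reboot; a minimal example:
# Remount /dev/shm with the corrected options and verify
mount -o remount,rw,exec /dev/shm
mount | grep "/dev/shm"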
Preparing the network
Oracle RAC needs at least two physical interfaces The first one is dedicated to the interconnect traffic The second one will be used for public access to the server and for the Oracle Virtual-IP address as well In case you want to implement bonding consider additional network interfaces
For clusters using single interfaces for private networks each nodes private interface for interconnects must be on the same subnet and that subnet must be connected to every node of the cluster
For clusters using Redundant Interconnect Usage each private interface should be on a different subnet However each cluster member node must have an interface on each private interconnect subnet and these subnets must connect to every node of the cluster
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
bull A public IP address for each node
bull A virtual IP address for each node
bull A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data            Value
Cluster name    okc12c
SCAN address 1  172.16.0.34
SCAN address 2  172.16.0.35
SCAN address 3  172.16.0.36
Data                          Node 1          Node 2
Server public name            oracle52        oracle53
Server public IP address      172.16.0.52     172.16.0.53
Server VIP name               oracle52vip     oracle53vip
Server VIP address            172.16.0.32     172.16.0.33
Server private name 1         oracle52priv0   oracle53priv0
Server private IP address 1   192.168.0.52    192.168.0.53
Server private name 2         oracle52priv1   oracle53priv1
Server private IP address 2   192.168.1.52    192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[rootoracle52 ~] ip addr
1 lo ltLOOPBACKUPLOWER_UPgt mtu 16436 qdisc noqueue state UNKNOWN
linkloopback 000000000000 brd 000000000000
inet 1270018 scope host lo
inet6 1128 scope host
valid_lft forever preferred_lft forever
2 eth0 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3c brd ffffffffffff
inet 1721605321 brd 172160255 scope global eth0
inet6 fe80217a4fffe77ec3c64 scope link
valid_lft forever preferred_lft forever
3 eth1 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3e brd ffffffffffff
inet 19216805324 brd 1921680255 scope global eth1
inet6 fe80217a4fffe77ec3e64 scope link
valid_lft forever preferred_lft forever
4 eth2 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec40 brd ffffffffffff
inet 19216815316 brd 192168255255 scope global eth2
inet6 fe80217a4fffe77ec4064 scope link
Enter into /etc/hosts the addresses and names for:
bull interconnect names for system 1 and system 2
bull VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34    oracle34
172.16.0.35    scan2
172.16.0.36    scan3
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
172.16.0.52    oracle52
172.16.0.53    oracle53
During the installation process IPv6 can be unselected IPv6 is not supported for the private interconnect traffic
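Before starting the installation, it is also worth verifying that the SCAN name resolves to the three addresses through the DNS, and not through /etc/hosts. The SCAN name used below is only a hypothetical example; use the name registered in your corporate DNS:
# The SCAN should return all three addresses (round-robin) from the DNS
nslookup okc12c-scan
dig +short okc12c-scan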
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes During installation the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware The time zone default is used for databases Oracle ASM and any other managed processes
Two options are available for time synchronization
bull An operating system configured network time protocol (NTP)
bull Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 165528920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:  [ OK ]
Starting ntpd:       [ OK ]
Check if the NTP server is reachable. The reach value needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
     remote       refid   st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2austinhp     GPS      1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntphpnet
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
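After editing /etc/sysconfig/ntpd, restart the daemon and confirm that the -x option is effective. This is a quick check, not part of the original procedure:
# Restart ntpd and verify the slewing option is on the command line
service ntpd restart
ps -ef | grep [n]tpd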
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will provide the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote       refid          st t when poll reach   delay   offset  jitter
==============================================================================
 ntphpnet     172.16.255.10      3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote       refid          st t when poll reach   delay   offset  jitter
==============================================================================
 ntphpnet     172.16.58.10       3 u    3   64     1  108.900   12.492   0.000
The error will be log as
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue it is mandatory to get the same refid on all nodes of the cluster Best case is to point to a single NTP server or to a GPS server as shown in the example below
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
Check the SELinux setting
In some circumstances, the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables firewall in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify explicitly the uid and gid.
Let's check first whether the uids and gids are already used:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software Secure Shell (SSH) connectivity must be set up between all cluster member nodes Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on and copy
files to the other cluster nodes You must configure SSH so that these commands do not prompt for a password Oracle Enterprise Manager also uses SSH
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster:
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: The authorized_keys file was not correctly updated. For passphrase-free access in both directions, it is necessary to manually export the rsa public key from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually.
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_usernamenodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_usernamenodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
(should return the date of node1)
ssh nodename2 date
(should return the date of node2)
ssh private_interconnect_nodename1 date
(should return the date of node1)
ssh private_interconnect_clunodename2 date
(should return the date of node2)
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
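The new limits only apply to fresh login sessions; they can be verified with ulimit. This is a minimal check, not part of the original procedure:
# Max user processes, open files and stack size for the two software owners
su - grid   -c 'ulimit -u -n -s'
su - oracle -c 'ulimit -u -n -s'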
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required).
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd     0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies. They are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize DM Multipath features, or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the array settings which are already built in as well. For now, our multipath.conf file looks like this:
[rootoracle52 yumreposd] more etcmultipathconf
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part shown in red in the original document (the last title stanza below, which boots the renamed initramfs):
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level IO operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid 360002ac0000000000000001f00006e40
                mode 0600
        }
        multipath {
                wwid 360002ac0000000000000002100006e40
                alias voting
        }
        multipath {
                wwid 360002ac0000000000000002200006e40
                alias data01
        }
        multipath {
                wwid 360002ac0000000000000002300006e40
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F before, to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block device to the raw device, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below The file called ldquo99-oraclerulesrdquo is a copy of etcudevrulesd60-rawrules which has been updated with our own data
[rootdbkon01 rulesd] pwd
etcudevrulesd
[rootdbkon01 rulesd] more 99-oraclerules
This file and interface are deprecated
Applications needing raw device access should open regular
block devices with O_DIRECT
Enter raw device bindings here
An example would be
ACTION==add KERNEL==sda RUN+=binraw devrawraw1 N
to bind devrawraw1 to devsda or
ACTION==add ENVMAJOR==8 ENVMINOR==1 RUN+=binraw
devrawraw2 M m
to bind devrawraw2 to the device with major 8 minor 1
Oracle Configuration Registry
KERNEL== mappervoting OWNER=root GROUP=oinstall MODE=640
Voting Disks
KERNEL==mapperdata01 OWNER=oracle GROUP=dba MODE=660
KERNEL==mapperfra01 OWNER=oracle GROUP=dba MODE=660
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distribution However since Red Hat 60 Oracle only provides this library to Oracle Linux
Since version 64 Red Hat (RH) does provide its own library It is part of the supplementary channel As of version 6 the RH ASMLib is not supported
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib This white paper is available here
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from the Oracle website.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster:
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk, once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should be owned by the grid user and the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk querydisk listdisks etc
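For instance, querydisk can be used to map an ASM disk label back to its block device. A minimal example follows; the exact output format may differ between ASMLib releases:
# Show the device behind a label, and the matching /dev path
oracleasm querydisk -d DATA01
oracleasm querydisk -p DATA01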
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm      0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space:
• At least 35 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 58 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                          Usage                                   Size
/u01/app/oracle               $ORACLE_BASE for the oracle db owner    58GB
/u01/app/oracle/12c           $ORACLE_HOME for the oracle db user     –
/u01/app/base                 $ORACLE_BASE for the grid owner         35GB
/u01/app/grid/12c             $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01    Flash recovery area (ASM)               20GB
/dev/oracleasm/disks/VOTING   OCR (volume)                            2GB
/dev/oracleasm/disks/DATA01   Database (volume)                       20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk IO scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the IO scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
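As an alternative to the grub elevator option, which changes the scheduler for every block device, a udev rule can restrict the Deadline scheduler to the device-mapper devices only. The rule below is a sketch; the file name and the dm-* match are assumptions to adapt to your environment:
# /etc/udev/rules.d/60-oracle-schedulers.rules -- illustrative sketch
# Set the Deadline IO scheduler on all device-mapper block devices at boot
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"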
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password. Provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo. Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root   ALL=(ALL)   ALL
grid   ALL=(ALL)   NOPASSWD: ALL
oracle ALL=(ALL)   NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user, as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf %{version} redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support note "Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Select optional languages if needed
Enter cluster name and SCAN name Remember the SCAN name needs to be resolved by the DNS For high availability purposes Oracle recommends using 3 IP addresses for the SCAN service The service will also work if only one is used
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME Once again both directories should be parallel $ORACLE_HOME canrsquot be a subdirectory of $ORACLE_BASE
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status:
# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
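Additional ASMLib checks can be run on each node to make sure the disks are visible; a short sketch using the standard oracleasm commands (the disk labels below reflect the ones used in this setup and may differ in your environment):
# /etc/init.d/oracleasm listdisks
DATA01
FRA01
VOTING01
# /etc/init.d/oracleasm querydisk DATA01
Disk "DATA01" is a valid ASM disk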
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode; use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip

Check the status of the cluster layer
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; crs_stat remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
oracle53:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE oracle52 Started,STABLE
ora.cluster_interconnect.haip
1 ONLINE ONLINE oracle52 STABLE
ora.crf
1 ONLINE ONLINE oracle52 STABLE
ora.crsd
1 ONLINE ONLINE oracle52 STABLE
ora.cssd
1 ONLINE ONLINE oracle52 STABLE
ora.cssdmonitor
1 ONLINE ONLINE oracle52 STABLE
ora.ctssd
1 ONLINE ONLINE oracle52 OBSERVER,STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.drivers.acfs
1 ONLINE ONLINE oracle52 STABLE
ora.evmd
1 ONLINE ONLINE oracle52 STABLE
ora.gipcd
1 ONLINE ONLINE oracle52 STABLE
ora.gpnpd
1 ONLINE ONLINE oracle52 STABLE
ora.mdnsd
1 ONLINE ONLINE oracle52 STABLE
ora.storage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf
All nodes have same hosts entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups, so there is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from a command line interface
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
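As an alternative to ASMCA, a disk group can also be created from SQL*Plus connected to the ASM instance; a minimal sketch, assuming a spare ASMLib disk labeled DATA02 (hypothetical) and external redundancy as above:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
A disk group created this way is mounted only on the local instance; it then needs to be mounted on the other node (ALTER DISKGROUP DATA2 MOUNT;) so that both instances can use it.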
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
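A quick way to check these settings on all nodes at once (a sketch, assuming the .bash_profile files shown in the appendix are already in place):
[oracle@oracle52 ~]$ for node in oracle52 oracle53; do ssh $node '. ~/.bash_profile; echo "$(hostname): $ORACLE_BASE $ORACLE_HOME"'; done
oracle52: /u01/app/oracle /u01/app/oracle/12c
oracle53: /u01/app/oracle /u01/app/oracle/12c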
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries; the database will be created later with DBCA.
Select RAC installation.
The member nodes of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space; as said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
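Server pools can also be managed from the command line with srvctl once the Grid Infrastructure is running; a short sketch (the pool name and limits below are illustrative only):
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool pool12c -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool -serverpool pool12c
A policy-managed database created by DBCA can then be placed in such a pool instead of being tied to fixed nodes.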
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation; click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
In node 2
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
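DBCA sets these parameters automatically; should they ever need to be adjusted by hand, the statements would look like the following sketch (values taken from this setup; the instance name in the SID clause is illustrative):
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))' SCOPE=BOTH SID='HP12C_1';
The local_listener value is set per instance (the VIP of the node hosting that instance), while remote_listener is the same for all instances.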
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON       # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON       # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 26-JUL-2013 14:04:22
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/12c/network/admin/listener.ora
Listener Log File
/u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/12c/network/admin/listener.ora
Listener Log File
/u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
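From a client perspective, only the SCAN name is needed in the connection descriptor; a minimal tnsnames.ora entry sketch for the database created in this document (service name HP12C taken from the listener output above):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )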
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful

Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies, tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.
HP Virtual Connect
HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure It adds a hardware abstraction layer that removes the direct coupling between them Server administrators can physically wire the uplinks from the enclosure to its network connections once and then manage the network addresses and uplink paths through Virtual Connect software Using Virtual Connect interconnect modules provides the following capabilities
• Reduces the number of cables required for an enclosure compared to using pass-through modules
• Reduces the number of edge switches that LAN and SAN administrators must manage
• Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators
• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency, and improving performance
• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications
Without Virtual Connect abstraction changes to server hardware (for example replacing the system board during a service event) often result in changes to the MAC addresses and WWNs The server administrator must then contact the LANSAN administrators give them updated addresses and wait for them to make the appropriate updates to their infrastructure With Virtual Connect a server profile holds the MAC addresses and WWNs constant so the server administrator can apply the same networking profile to new hardware This can significantly reduce the time for a service event
Virtual Connect Flex-10 technology further simplifies network interconnects Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs) This lets you replace multiple lower-bandwidth NICs with a single 10 Gb adapter Prior to Flex-10 a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware up to 50 by consolidating all the NIC connections onto two 10 Gb ports
Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet Fibre Channel or accelerated iSCSI traffic into a single 10 Gb data stream (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function and (3) preserve routing information for all data types Flex-10 technology and FlexFabric adapters reduce management complexity the number of NICs HBAs and interconnect modules needed and associated power and operational costs Using FlexFabric technology lets you reduce the hardware requirements by 95 for a full enclosure of 16 virtualized serversmdashfrom 40 components to two FlexFabric modules
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly while reducing the time it takes to provision storage connectivity Figure 3 and 4 show an example of the interface to the Virtual Connect environment
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator modulefirmware If desired a customer may order a second redundant Onboard Administrator module for each enclosure When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure they work in an active - standby mode assuring full redundancy with integrated management
Figure 5 below shows the information related to the enclosure we used in this exercise On the right side the front and rear view of the enclosure component is available By clicking on one component the detailed information will appear in the central frame
Figure 5 From the HP Onboard Administrator very detailed information related to the server information is available
More about the HP Onboard Administrator: hp.com/go/oa.
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6. Component connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal: 198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM             Swap
4 GB to 16 GB   1 times the RAM size
> 16 GB         16 GB
Our HP ProLiant blades had 192GB of memory, so we created a 4GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename        Type        Size     Used   Priority
/dev/dm-3       partition   4194296  0      -1
The free command gives an overview of the current memory consumption; the -g extension provides values in GB.
[root@oracle52 ~]# free -g
             total   used   free   shared   buffers   cached
Mem:           189     34    154        0         0       29
-/+ buffers/cache:       5    184
Swap:            3      0      3
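If the swap space ever needs to be extended to get closer to the Oracle recommendation, a swap file can be added online; a generic sketch (size and path are examples only, and an /etc/fstab entry is needed to make it persistent):
[root@oracle52 ~]# dd if=/dev/zero of=/swapfile01 bs=1M count=8192
[root@oracle52 ~]# chmod 600 /swapfile01
[root@oracle52 ~]# mkswap /swapfile01
[root@oracle52 ~]# swapon /swapfile01
[root@oracle52 ~]# swapon -s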
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem             Size  Used  Avail  Use%  Mounted on
/dev/mapper/mpathap2    39G  4.1G    33G   12%  /
In our case /tmp is part of /. Even if this is not an optimal setting, we are far above the 1GB free space requirement.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
Note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7 and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir /cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282iso /cdrom
[root@oracle52 kits]# cd /cdrom/hp/swpackages
[root@oracle52 kits]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed Click OK the system will work for almost 10 to 15 minutes
Operation completed Check the log SPP will require a reboot of the server once fully installed
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (Fibre Channel or SAS, for instance), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility; look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (a symlink to rescan-scsi-bus.sh).
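Once sg3_utils is installed, a rescan of all SCSI hosts can be triggered with a single command; a sketch (run as root, then check dmesg or /proc/scsi/scsi for the new LUNs):
[root@oracle52 ~]# rescan-scsi-bus.sh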
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (**)
kernel.shmmax                    Half of the RAM, or 4GB (*)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file.
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p to load the updated parameters into the current session.
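The running values can then be spot-checked; a quick sketch (the output shows the expected values from the table above):
[root@oracle52 ~]# sysctl -p
[root@oracle52 ~]# sysctl kernel.sem fs.file-max net.ipv4.ip_local_port_range
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500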
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit release and the 64-bit release. The following command output will specify the base architecture of each package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum; this is the easiest way as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386     0.3.106-5     rhel-x86_64-server-5
libaio-devel.x86_64   0.3.106-5     rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
Preparing the network
Oracle RAC needs at least two physical interfaces: the first one is dedicated to the interconnect traffic, and the second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
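Once the cluster is installed, the interface roles actually registered with Oracle Clusterware can be displayed with oifcfg; illustrative output for the interfaces and subnets used in this setup:
[grid@oracle52 ~]$ oifcfg getif
eth0  172.16.0.0   global  public
eth1  192.168.0.0  global  cluster_interconnect
eth2  192.168.1.0  global  cluster_interconnect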
About the IP addressing requirement: this installation guide documents how to perform a typical installation; it doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                          Node 1          Node 2
Server public name            oracle52        oracle53
Server public IP address      172.16.0.52     172.16.0.53
Server VIP name               oracle52vip     oracle53vip
Server VIP address            172.16.0.32     172.16.0.33
Server private name 1         oracle52priv0   oracle53priv0
Server private IP address 1   192.168.0.52    192.168.0.53
Server private name 2         oracle52priv1   oracle53priv1
Server private IP address 2   192.168.1.52    192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[rootoracle52 ~] ip addr
1 lo ltLOOPBACKUPLOWER_UPgt mtu 16436 qdisc noqueue state UNKNOWN
linkloopback 000000000000 brd 000000000000
inet 1270018 scope host lo
inet6 1128 scope host
valid_lft forever preferred_lft forever
2 eth0 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3c brd ffffffffffff
inet 1721605321 brd 172160255 scope global eth0
inet6 fe80217a4fffe77ec3c64 scope link
valid_lft forever preferred_lft forever
3 eth1 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3e brd ffffffffffff
inet 19216805324 brd 1921680255 scope global eth1
inet6 fe80217a4fffe77ec3e64 scope link
valid_lft forever preferred_lft forever
4 eth2 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec40 brd ffffffffffff
inet 19216815316 brd 192168255255 scope global eth2
inet6 fe80217a4fffe77ec4064 scope link
Enter into /etc/hosts the addresses and names for:
• interconnect names for system 1 and system 2
• VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check if the NTP server is reachable. The value in the reach column needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp   GPS             1 u    5   64    1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
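The new options are only read when the daemon starts, so restart ntpd on every node after this change (same command as above):
[root@oracle52 network-scripts]# service ntpd restart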
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. In that case, ntpq -p will report the same time but a different refid on each node (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS source, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp   GPS             1 u    5   64    1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config.
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
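The static change in /etc/selinux/config can also be made non-interactively; the sed pattern below is a sketch that assumes the parameter is currently set to enforcing:
[root@oracle52 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config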
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
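Stopping the service only lasts until the next reboot. If your site security policy allows the firewall to stay disabled on the cluster nodes, also remove it from the boot sequence:
[root@oracle52 ~]# chkconfig iptables off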
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1. On each node, check if ssh is already active:
   ssh nodename1 date
   ssh nodename2 date
2. Generate the key:
   ssh-keygen -b 1024 -t dsa
   Accept the default value, without passphrase.
3. Export the public key to the remote node:
   cd ~/.ssh
   scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
   cat id_dsa.pub >> authorized_keys
   cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
   should send the date of node1
ssh nodename2 date
   should send the date of node2
ssh private_interconnect_nodename1 date
   should send the date of node1
ssh private_interconnect_nodename2 date
   should send the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
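A quick way to confirm that the new limits are picked up is to open a fresh login shell for each owner and display the process, open file, stack and memlock limits:
[root@oracle52 ~]# su - grid -c "ulimit -u -n -s -l"
[root@oracle52 ~]# su - oracle -c "ulimit -u -n -s -l"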
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1. Locate the cvuqdisk RPM package, which is in the rpm directory on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 0
                gid 0
        }
}
multipathd>
In order to customize the DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
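The running daemon does not pick up the new device section automatically. A minimal sketch to reload the configuration and confirm that the 3PAR settings are applied:
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipathd -k"show config" | grep -A5 3PARdata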
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the backup entry shown below (the third title, which points to the saved initramfs):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command.
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the multipaths section of /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid 360002ac0000000000000001f00006e40
                mode 0600
        }
        multipath {
                wwid 360002ac0000000000000002100006e40
                alias voting
        }
        multipath {
                wwid 360002ac0000000000000002200006e40
                alias data01
        }
        multipath {
                wwid 360002ac0000000000000002300006e40
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and by allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
The disable/enable option of the oracleasm script controls whether the package is automatically started at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk, once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
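The scan order is only read when the oracleasm service starts, so restart it for the change to take effect and confirm the disks are still visible:
[root@oracle52 ~]# service oracleasm restart
[root@oracle52 ~]# oracleasm listdisks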
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if redundancy other than external is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id command. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
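After the next reboot, the kernel command line can be checked to confirm that the elevator parameter was picked up:
[root@oracle52 ~]# cat /proc/cmdline
The output should contain elevator=deadline.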
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run the scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check it from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner accordingly with the groups initially created
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path":
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
oracle53:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can easily be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
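For reference, the same kind of disk group can also be created without ASMCA, from a SQL*Plus session connected to the local ASM instance. A minimal sketch, assuming the grid environment (+ASM1) is loaded and using the ASMLib disk names created earlier:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';
A disk group created this way is only mounted on the node where it was created; run ALTER DISKGROUP ... MOUNT; on the other node (or use srvctl) to mount it cluster-wide.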
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as the oracle (oracle:oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid  10466  1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid  12601  1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid  22050  1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster, thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2:
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
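The SCAN listener resources themselves can be checked with srvctl as well, for example:
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl status scan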
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the cluster integrity after installation:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
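A pre-Clusterware installation check can be run the same way before launching the installer; a typical invocation, for example, is:
[grid@oracle52 ~]$ cluvfy stage -pre crsinst -n oracle52,oracle53 -verbose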
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                    File Name         Disk group
--  -----    -----------------                    ---------         ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637     (ORCL:VOTING01)   [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
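To verify the overall stack afterwards, crsctl can also be used; for example (run with the grid environment loaded as above):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stat res -t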
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hpcomgogetupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Figure 3 View of the Virtual Connect Manager home page of the environment used
Figure 4 The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise On the right side the front and rear view of the enclosure component is available By clicking on one component the detailed information will appear in the central frame
Figure 5 From the HP Onboard Administrator very detailed information related to the server information is available
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components connectivity
Figure 6 Component's connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4GB in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM            Swap
4 to 16 GB     1 times the RAM size
> 16 GB        16 GB
Our HP ProLiant blades had 192GB of memory, so we created a 4GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename        Type        Size      Used    Priority
/dev/dm-3       partition   4194296   0       -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB.
[root@oracle52 ~]# free -g
             total     used     free   shared   buffers   cached
Mem:           189       34      154        0         0       29
-/+ buffers/cache:        5      184
Swap:            3        0        3
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case /tmp is part of /. Even if this is not an optimal setting, we are far above the 1GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
By the way note that Oracle 12c is not available for Linux 32-bit architecture
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7 and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed Click OK the system will work for almost 10 to 15 minutes
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 supspprhel6]# tar xvf supspprhel64en.tar.gz
[root@oracle52 supspprhel6]# hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (like Fibre Channel/SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
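As a sketch, assuming a configured yum repository, installing the package, rescanning and checking for the 3PAR devices could look like this:
[root@oracle52 ~]# yum install sg3_utils
[root@oracle52 ~]# rescan-scsi-bus.sh
[root@oracle52 ~]# cat /proc/scsi/scsi | grep -i 3PAR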
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (**)
kernel.shmmax                    half of the RAM or 4GB (*)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # half the size of physical memory, in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # half the size of physical memory, in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p to load the updated parameters in the current session.
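To confirm that the values are active in the running kernel, the parameters can be read back with sysctl, for example:
[root@oracle52 ~]# sysctl kernel.sem kernel.shmmax kernel.shmall fs.file-max
[root@oracle52 ~]# sysctl net.ipv4.ip_local_port_range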
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will specify the base architecture of a specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386        0.3.106-5        rhel-x86_64-server-5
libaio-devel.x86_64      0.3.106-5        rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
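When a repository is available, the whole prerequisite list can also be pulled in with a single yum command. A sketch is shown below; the package names follow the list above, with the 32-bit packages referenced through the .i686 suffix, and the exact set should be adapted to your environment.
yum install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ \
    glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 \
    libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
    libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 \
    libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel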
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs      /dev/shm      tmpfs      defaults      0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor and modify the tmpfs line:
tmpfs      /dev/shm      tmpfs      rw,exec      0 0
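The new mount options can then be applied without a reboot; for example:
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep /dev/shm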
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies This functionality is available starting with Oracle Database 11g Release 2 (11202) If you use the Oracle Clusterware Redundant Interconnect feature then you must use IPv4 addresses for the interfaces
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                           Node 1            Node 2
Server public name             oracle52          oracle53
Server public IP address       172.16.0.52       172.16.0.53
Server VIP name                oracle52vip       oracle53vip
Server VIP address             172.16.0.32       172.16.0.33
Server private name 1          oracle52priv0     oracle53priv0
Server private IP address 1    192.168.0.52      192.168.0.53
Server private name 2          oracle52priv1     oracle53priv1
Server private IP address 2    192.168.1.52      192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• interconnect names for system 1 and system 2
• VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34    oracle34
172.16.0.35    scan2
172.16.0.36    scan3
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
172.16.0.52    oracle52
172.16.0.53    oracle53
During the installation process IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes During installation the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware The time zone default is used for databases Oracle ASM and any other managed processes
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:      [ OK ]
Starting ntpd:           [ OK ]
Check if the NTP server is reachable. The reach value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1   133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will provide the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1   128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1   108.900   12.492   0.000
The error will be log as
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server 172.16.58.10 is common only to the following nodes: oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server 172.16.255.10 is common only to the following nodes: oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS source, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1   133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate a failure during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value for the SELINUX parameter in /etc/selinux/config.
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. To update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:          [ OK ]
iptables: Setting chains to policy ACCEPT: filter    [ OK ]
iptables: Unloading modules:                [ OK ]
For more about the iptables settings, look at the Red Hat documentation (see the Red Hat iptables setting link in the "For more information" section). A way to keep the firewall disabled across reboots is shown below.
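For example, assuming the firewall is acceptable to leave disabled in your environment (adapt this to your security policy):
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables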
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software Secure Shell (SSH) connectivity must be set up between all cluster member nodes Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on and copy
files to the other cluster nodes You must configure SSH so that these commands do not prompt for a password Oracle Enterprise Manager also uses SSH
You can configure SSH from the OUI interface during installation for the user account running the installation The automatic configuration creates passwordless SSH connectivity between all cluster member nodes Oracle recommends that you use the automatic procedure if possible Itrsquos also possible to use a script provided in the Grid Infrastructure distribution
To enable the script to run you must remove stty commands from the profiles of any Oracle software installation
owners and remove other security measures that are triggered during a login and that generate messages to the terminal These messages mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer If they are not disabled then SSH must be configured manually before an installation can be run
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1 On each node check if ssh is already active
ssh nodename1 date
ssh nodename2 date
2 Generate key
ssh-keygen -b 1024 -t dsa
Accept default value without passphrase
3 Export public key to the remote node
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_clunodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
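A simple way to confirm the limits are effective is to read them back from a fresh session of each user; for example:
[root@oracle52 ~]# su - grid -c "ulimit -Sn -Su -Ss"
[root@oracle52 ~]# su - grid -c "ulimit -Hn -Hu -Hs"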
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[rootoracle52 rpm] CVUQDISK_GRP=oinstall export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a Multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable The blacklist section defines which devices should be excluded from the multipath topology discovery The blacklist_exceptions section defines which devices should be included in the multipath topology discovery
despite being listed in the blacklist section The multipaths section defines the multipath topologies They are indexed by a World Wide Identifier (WWID) The devices section defines the device-specific settings based on vendor
and product values
Check the current fresh installed configuration
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
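Optionally, before rebooting, the content of the freshly generated initramfs can be inspected to confirm that the multipath tools and the qla2xxx module were included. This is just a suggested sanity check; lsinitrd is shipped with the dracut package on RHEL 6:
[root@oracle52 boot]# lsinitrd /boot/initramfs-$(uname -r).img | grep -E "multipath|qla2xxx"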
Finally, we may update the boot menu for rollback purposes by adding a backup entry (the third title section below) that boots the renamed initramfs image:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level IO operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F beforehand to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block device to the raw device as raw is not supported anymore
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a sketch is shown after the udev example below)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
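For completeness, if option 1 (rc.local) had been chosen instead, a few lines appended to /etc/rc.local could enforce the same ownership and permissions at every boot. This is only an illustrative sketch mirroring the values of the udev example above; it was not applied in our environment because ASMLib handles device permissions for us:
# /etc/rc.local additions (illustrative only, not used in this setup)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01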
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library as part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[rootoracle52 ASMLib] yum install kmod-oracleasm-206rh1-2el6x86_64rpm
oracleasmlib-204-1el6x86_64rpm oracleasm-support-218-1el6x86_64rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The enable/disable options of the oracleasm script control whether the package is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be marked as an ASM disk and made available. This is accomplished by creating an ASM disk, once for the entire cluster.
[rootoracle52 ASMLib] oracleasm createdisk VOTING devmappervoting
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk DATA01 devmapperdata01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk FRA01 devmapperfra01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup the other nodes need to be notified about it Run the createdisk command on one node and then run scandisks on every other node
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally check the ownership of the asm devices It should be member of the asmadmin group
[rootoracle52 ASMLib] ls -l devoracleasmdisks
brw-rw---- 1 grid asmadmin 253 5 Jul 25 1526 DATA01
brw-rw---- 1 grid asmadmin 253 4 Jul 25 1526 FRA01
brw-rw---- 1 grid asmadmin 253 6 Jul 25 1526 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
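For instance, querydisk can be used to confirm that a label is a valid ASM disk and, with the -p switch where available, to display the backing block device (a hedged example, output omitted):
[root@oracle52 ~]# oracleasm querydisk -p DATA01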
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. Here we define a scan order that gives priority to the multipath devices and exclude the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
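After editing this file, the new scan settings can be taken into account without a reboot by restarting the oracleasm service and re-scanning the disks (a suggested sequence, to be run on each node):
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks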
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2 only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases The reason is the ASM directory is now part of the cluster ORACLE-HOME (also called GRID ORACLE_HOME) Oracle considers that storage and cluster management are system administration tasks while the database is a dba task
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: need one of each, or more if external redundancy is used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                             Usage                                   Size
/u01/app/oracle                  $ORACLE_BASE for the oracle db owner    5.8GB
/u01/app/oracle/12c              $ORACLE_HOME for the oracle db user     –
/u01/app/base                    $ORACLE_BASE for the grid owner         3.5GB
/u01/app/grid/12c                $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01       Flash recovery area (ASM)               20GB
/dev/oracleasm/disks/VOTING      OCR (volume)                            2GB
/dev/oracleasm/disks/DATA01      Database (volume)                       20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the IO scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent we can update etcgrubconf
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
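As an alternative to the global elevator= boot parameter, a udev rule can apply the Deadline scheduler only to the device-mapper devices. The sketch below is an illustration we did not use in this setup (the rule file name is arbitrary):
# /etc/udev/rules.d/60-oracle-scheduler.rules (illustrative)
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"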
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root using one of the following options:
• Use the root password. Provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo. Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[rootoracle52 sys] visudo
Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD ALL
oracle ALL=(ALL) NOPASSWD ALL
Once this setting is enabled grid and oracle users can act as root by prefixing each and every command with a sudo For instance
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously enabling sudo for grid and oracle users raises security issues It is recommended to turn sudo off right after the complete binary installation
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes.
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                 [100%]
   1:redhat-release          [100%]
This is required because runcluvfy runs the command rpm -q --qf "%{version}" redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
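Name resolution of the SCAN can be verified from any node before proceeding; in our setup oracle34 resolves to a single address (output trimmed; with the recommended configuration, three addresses would be returned in a round-robin fashion):
[grid@oracle52 ~]$ nslookup oracle34
Name:    oracle34
Address: 172.16.0.34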
Configure the public and VIP names of all nodes in the cluster. The SSH setup was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and management of ASM disk groups, so there is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can simply be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy as we do not need any extra failure group.
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
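The same disk groups could also have been created without the GUI, either with asmcmd or from SQL*Plus connected as SYSASM to the ASM instance. A hedged sketch, using the ASMLib labels discovered earlier (the DATA and FRA groups shown above were actually created with ASMCA):
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';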
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
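With the SCAN in place, client sessions can reach either instance through a simple EZConnect string; for example (an illustrative connection using the HP12C service created earlier in this paper):
sqlplus system@//oracle34:1521/HP12C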
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find the Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• Verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
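The current autostart setting can be displayed at any time with the config option of crsctl (listed in the usage output shown earlier):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs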
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Figure 4. The Virtual Connect profile of one of the cluster nodes
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second, redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.
Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on one component, the detailed information appears in the central frame.
Figure 5. From the HP Onboard Administrator, very detailed information related to the server is available
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components' connectivity.
Figure 6. Components' connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database.
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4GB per node in an Oracle RAC cluster.
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal: 198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM Swap
4 to 16 GB 1 times the RAM size
> 16 GB 16 GB
Our HP ProLiant blades had 192GB of memory, so we created a 4GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-3 partition 4194296 0 -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB.
[root@oracle52 ~]# free -g
total used free shared buffers cached
Mem: 189 34 154 0 0 29
-/+ buffers/cache: 5 184
Swap: 3 0 3
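Should a larger swap area ever be needed to match the Oracle recommendation, it can be added online with a swap file rather than by repartitioning; a minimal sketch (the /swapfile01 name and 16GB size are arbitrary choices, not part of this setup):
[root@oracle52 ~]# dd if=/dev/zero of=/swapfile01 bs=1M count=16384   # 16GB swap file
[root@oracle52 ~]# chmod 600 /swapfile01
[root@oracle52 ~]# mkswap /swapfile01
[root@oracle52 ~]# swapon /swapfile01
[root@oracle52 ~]# echo "/swapfile01 swap swap defaults 0 0" >> /etc/fstab   # make it persistent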
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mpathap2 39G 4.1G 33G 12% /
In our case /tmp is part of the root file system. Even if this is not an optimal setting, we are far above the 1GB of free space required.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
Note that Oracle 12c is not available for the Linux 32-bit architecture.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally, go to My Oracle Support and check that this version is certified, in the certification tab, as shown in figure 7.
Figure 7. Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7 and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from hp.com: http://h20566.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdHome/?sp4ts.oid=5177950&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3DswEnvOID%253D4103%257CswLang%253D%257Caction%253DlistDriver&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable.
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013.02.0-0_725490-001_spp_2013.02.0-SPP2013020B.2013_0628.2.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next.
Provide the credentials for root and click Next.
Select the components you need to install and click Install.
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again.
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# ./hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (e.g., Fibre Channel, SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look at the "dmesg" log messages to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance <SCSI host number>
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (a symlink to rescan-scsi-bus.sh).
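A minimal sketch of that alternative, assuming the package is reachable from a configured yum repository:
[root@oracle52 ~]# yum install sg3_utils
[root@oracle52 ~]# rescan-scsi-bus.sh
[root@oracle52 ~]# grep 3PARdata /proc/scsi/scsi    # confirm the 3PAR LUNs are visible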
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The values should be at least the following:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (*)
kernel.shmmax half of the RAM, 4GB maximum (**)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(*) 8239044 in our case
(**) the maximum value is 4294967296
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
# (half the size of physical memory in bytes)
kernel.shmmax = 101606905856
# Controls the maximum number of shared memory segments, in pages
# (half the size of physical memory in pages)
kernel.shmall = 24806374
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
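To confirm that the running kernel now matches the table above, the values can be queried in a single pass, for example:
[root@oracle52 ~]# sysctl kernel.sem kernel.shmall kernel.shmmax kernel.shmmni fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range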
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit), or later
• unixODBC-devel-2.2.14-11.el6 (64-bit), or later
The packages above are necessary in order to install Oracle; the package releases listed are the minimum releases required. You can check whether these packages are installed with one of the following commands:
rpm -q make-3.79.1        # check for an exact release
or
rpm -qa | grep make       # pattern match in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of a specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
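One possible way to check the whole list in a single pass is a small loop over the package names; the sketch below simply restates the packages listed above and flags anything missing:
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh \
           libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst libX11 \
           libXau libxcb libXi make sysstat unixODBC unixODBC-devel; do
  rpm -q --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" $pkg || echo "$pkg is MISSING"
done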
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386 0.3.106-5 rhel-x86_64-server-5
libaio-devel.x86_64 0.3.106-5 rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing:
libaio-devel i386 0.3.106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm | 12 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libaio-devel 1/1
Installed:
libaio-devel.i386 0:0.3.106-5
Complete!
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
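The new options can then be applied without a reboot by remounting the file system, for example:
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep tmpfs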
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. If you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation; it does not cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward, we need to define the node and cluster information:
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
The current configuration should contain at least eth0 and eth1 as, respectively, the public and private interfaces. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized, in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
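Before going further, it is worth checking that every name resolves identically from both nodes; a simple loop such as the one below (run on each node, using the names defined above) does this:
for h in oracle52 oracle53 oracle52vip oracle53vip oracle52priv0 oracle53priv0 oracle52priv1 oracle53priv1; do
  getent hosts $h || echo "$h does not resolve"
done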
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52      # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
Check if the NTP server is reachable. The reach value (shown in red below) needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
*ntp2.austin.hp .GPS. 1 u 5 64 1 133.520 15.473 0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the following commands to do so:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of the Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
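As noted above, the daemon must be restarted for the -x flag to take effect, and the peer status can then be checked again:
[root@oracle52 ~]# service ntpd restart
[root@oracle52 ~]# ntpq -p     # the reach column should become non-zero after a few minutes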
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. The ntpq -p command will then report the same time but a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp.hp.net 172.16.255.10 3 u 6 64 1 128.719 5.275 0.000
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp.hp.net 172.16.58.10 3 u 3 64 1 108.900 12.492 0.000
The error will be log as
INFO: INFO: Error Message: PRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes: oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO: INFO: Error Message: PRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes: oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9. runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS-referenced server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
*ntp2.austin.hp .GPS. 1 u 5 64 1 133.520 15.473 0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation referenced in the "For more information" section.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle users. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster:
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually export the rsa public key from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate a key pair:
ssh-keygen -b 1024 -t dsa
Accept the default values, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should return the date of node1
ssh nodename2 date
should return the date of node2
ssh private_interconnect_nodename1 date
should return the date of node1
ssh private_interconnect_nodename2 date
should return the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
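Because pam_limits only applies these values at login time, a quick way to verify them is to open a fresh session for each owner and display the effective limits, for example:
[root@oracle52 ~]# su - grid -c "ulimit -u -n -s -l"
[root@oracle52 ~]# su - oracle -c "ulimit -u -n -s -l"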
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM manually. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether an existing version of the cvuqdisk package is installed:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... [100%]
1:cvuqdisk [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with the subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
multipath {
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
}
}
multipathd>
In order to customize DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
user_friendly_names yes
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^dcssblk[0-9]*"
device {
vendor "DGC"
product "LUNZ"
}
device {
vendor "IBM"
product "S/390.*"
}
# don't count normal SATA devices as multipaths
device {
vendor "ATA"
}
# don't count 3ware devices as multipaths
device {
vendor "3ware"
}
device {
vendor "AMCC"
}
# nor highpoint devices
device {
vendor "HPT"
}
device {
vendor "HP"
product "Virtual_DVD-ROM"
}
wwid "*"
}
blacklist_exceptions {
wwid "360002ac0000000000000001f00006e40"
}
multipaths {
multipath {
uid 0
gid 0
wwid "360002ac0000000000000001f00006e40"
mode 0600
}
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
user_friendly_names yes
}
devices {
device {
vendor "3PARdata"
product "VV"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
path_selector "round-robin 0"
path_checker tur
hardware_handler "0"
failback immediate
rr_weight uniform
rr_min_io_rq 100
no_path_retry 18
}
}
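After editing /etc/multipath.conf, the new 3PAR device settings can be picked up without a reboot; a minimal sketch (safe at this stage because only the boot LUN is in use):
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipath -v2     # reprocesses the paths with the new device section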
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the backup ("bkp") entry shown at the end of the listing below; it points to the renamed initramfs.
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
# initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
root (hd0,0)
kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block- or file-level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:0 sda 8:0 active undef running
`- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:3 sdd 8:48 active undef running
`- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:2 sdc 8:32 active undef running
`- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:1 sdb 8:16 active undef running
`- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:0 sda 8:0 active undef running
`- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
multipath {
uid 0
gid 0
wwid "360002ac0000000000000001f00006e40"
mode 0600
}
multipath {
wwid "360002ac0000000000000002100006e40"
alias voting
}
multipath {
wwid "360002ac0000000000000002200006e40"
alias data01
}
multipath {
wwid "360002ac0000000000000002300006e40"
alias fra01
}
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
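To make sure the new scan settings are actually used, the ASMLib service can be restarted and the disks rescanned; for example:
[root@oracle52 ~]# service oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks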
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME); Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is not used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path Usage Size
/u01/app/oracle $ORACLE_BASE for the oracle db owner 5.8GB
/u01/app/oracle/12c $ORACLE_HOME for the oracle db user –
/u01/app/base $ORACLE_BASE for the grid owner 3.5GB
/u01/app/grid/12c $ORACLE_HOME for the grid user –
/dev/oracleasm/disks/FRA01 Flash recovery area (ASM) 20GB
/dev/oracleasm/disks/VOTING OCR (volume) 2GB
/dev/oracleasm/disks/DATA01 Database (volume) 20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the correct privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the correct privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:2 sdc 8:32 active undef running
`- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:1 sdb 8:16 active undef running
`- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:0:3 sdd 8:48 active undef running
`- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use the scsi_id command. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
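When only the ASM device-mapper devices should use the Deadline scheduler, an alternative to the global elevator= switch is a small udev rule; a sketch of such a rule, with an arbitrarily chosen file name, is shown below:
[root@oracle52 sys]# vi /etc/udev/rules.d/60-oracle-schedulers.rules      # hypothetical file name
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"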
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the user name and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
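A quick way to confirm the setting on both nodes at once is to run the check over ssh; this is only a sketch and assumes the user equivalence configured earlier and the node names used in this paper:
[grid@oracle52 ~]$ for node in oracle52 oracle53; do ssh $node "grep ORACLE_ ~/.bash_profile"; done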
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user, as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf '%{version}' redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility - which is located in the base directory of the media file - and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a DISPLAY.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
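Before moving on, it is worth confirming that the SCAN name resolves correctly. The check below is only an illustration using the SCAN name defined for this paper; the addresses returned should match the SCAN entries declared in the DNS (172.16.0.34, 172.16.0.35 and 172.16.0.36 in our plan):
[grid@oracle52 ~]$ nslookup oracle34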
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner according to the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) Note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path"
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
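As an illustration of the client side, a tnsnames.ora entry pointing at the SCAN could look like the sketch below. The alias name is arbitrary; the SCAN name, port and service name are the ones used in this paper:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )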
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with ASMCA.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING
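For completeness, a disk group can also be created without ASMCA, from a SQL*Plus session connected as SYSASM to the ASM instance. The statement below is only a sketch of what would be equivalent to the DATA disk group created above (external redundancy, one ASMLib disk, no extra attributes):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';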
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
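For reference only, a silent creation driven from the command line would look roughly like the sketch below. It is not part of the procedure described in this paper, the passwords are placeholders, and the exact flags should be checked against dbca -help for your release:
[oracle@oracle52 ~]$ dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbname HP12C -sysPassword <password> -systemPassword <password> \
  -storageType ASM -diskGroupName DATA -nodelist oracle52,oracle53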
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload load balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))  # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON  # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET  # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))  # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON  # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET  # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
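When autostart is disabled, the stack can still be stopped and started manually on a node; a typical sequence, run as root with the same environment as above, is:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs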
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
• Wizards for simple, fast setup and configuration
• Highly available and secure access to the HP BladeSystem infrastructure
• Security roles for server, network, and storage administrators
• Agent-less device health and status
• Thermal Logic power and cooling information and control
Each enclosure is shipped with one Onboard Administrator modulefirmware If desired a customer may order a second redundant Onboard Administrator module for each enclosure When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure they work in an active - standby mode assuring full redundancy with integrated management
Figure 5 below shows the information related to the enclosure we used in this exercise On the right side the front and rear view of the enclosure component is available By clicking on one component the detailed information will appear in the central frame
Figure 5 From the HP Onboard Administrator very detailed information related to the server information is available
More about the HP Onboard Administrator: hp.com/go/oa
Connectivity
The diagram in figure 6 below shows a basic representation of the components connectivity
Figure 6. Component connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system The minimum required is 4GB in an Oracle RAC cluster
[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size As per the Oracle documentation the swap ratio should be the following
RAM             Swap
4 GB to 16 GB   1 times the RAM size
> 16 GB         16 GB
Our HP ProLiant blades had 192GB of memory so we created a 4GB swap volume This is below the recommendation However because of the huge amount of RAM available we do not expect any usage of this swap space Keep in mind the swap activity negatively impacts database performance
The command swapon -s tells how much swap space exists on the system (in KB)
[rootoracle52 ~] swapon -s
Filename Type Size Used
Priority
devdm-3 partition 4194296 0 -1
The free command gives an overview of the current memory consumption The -g extension provides values in GB
[rootoracle52 ~] free -g
total used free shared buffers cached
Mem 189 34 154 0 0 29
-+ bufferscache 5 184
Swap 3 0 3
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12%  /
In our case, /tmp is part of /. Even if this is not an optimal setting, we are far above the 1GB free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
By the way note that Oracle 12c is not available for Linux 32-bit architecture
Then check the distribution and version you are using
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 64 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM look at the installation documentation httph18004www1hpcomproductsserversmanagementunifiedhpsum_infolibraryhtml
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP we need first to mount the ISO image Then from an X terminal run the hpsum executable
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed Click OK the system will work for almost 10 to 15 minutes
Operation completed Check the log SPP will require a reboot of the server once fully installed
To install the RHEL 64 supplement for HP SPP you must first untar the file before running hpsum again
[rootoracle52 kits] mkdir supspprhel6
[rootoracle52 kits] mv supspprhel64entargz supspprhel6
[rootoracle52 kits] cd supspprhel6
[rootoracle52 kits] tar xvf supspprhel64entargz
[rootoracle52 kits] hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot In order to discover new SCSI devices (like Fibre Channel SAS) you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone
Find what the host number is for the HBA
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1 Ask the HBA to issue a LIP signal to rescan the FC bus
[root@oracle52 ~]# echo "1" > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo "1" > /sys/class/fc_host/host2/issue_lip
2 Wait around 15 seconds for the LIP command to have effect
3 Ask Linux to rescan the SCSI devices on that HBA
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
Once the SPP is installed, an alternative is to use the hp_rescan utility. Look for it in /opt/hp.
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
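A typical invocation is simply to run the script with no options, which rescans the SCSI hosts already known to the system (this is only an illustrative sketch; see the script's help output for additional options):
[root@oracle52 ~]# rescan-scsi-bus.sh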
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (**)
kernel.shmmax                    Half of the RAM, or 4GB (*)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent update the etcsysctlconf file
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856     # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374         # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters in the current session.
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will specify the base architecture of the specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally installation of the packages should be done using yum This is the easiest way as long as a repository server is available
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs    /dev/shm    tmpfs    defaults    0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs    /dev/shm    tmpfs    rw,exec    0 0
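To apply a corrected entry without rebooting, the file system can be remounted in place and then checked again; a minimal sketch:
[root@oracle52 ~]# mount -o remount,rw,exec /dev/shm
[root@oracle52 ~]# mount | grep tmpfs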
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies This functionality is available starting with Oracle Database 11g Release 2 (11202) If you use the Oracle Clusterware Redundant Interconnect feature then you must use IPv4 addresses for the interfaces
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
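Once the Grid Infrastructure is installed, the interfaces declared for the public network and the interconnect can be verified with oifcfg. The output below is only an illustration of the expected format for the subnets used in this paper:
[grid@oracle52 ~]$ oifcfg getif
eth0  172.16.0.0    global  public
eth1  192.168.0.0   global  cluster_interconnect
eth2  192.168.1.0   global  cluster_interconnect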
About the IP addressing requirement: This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data              Value
Cluster name      okc12c
SCAN address 1    172.16.0.34
SCAN address 2    172.16.0.35
SCAN address 3    172.16.0.36
Data                          Node 1           Node 2
Server public name            oracle52         oracle53
Server public IP address      172.16.0.52      172.16.0.53
Server VIP name               oracle52vip      oracle53vip
Server VIP address            172.16.0.32      172.16.0.33
Server private name 1         oracle52priv0    oracle53priv0
Server private IP address 1   192.168.0.52     192.168.0.53
Server private name 2         oracle52priv1    oracle53priv1
Server private IP address 2   192.168.1.52     192.168.1.53
The current configuration should contain at least the following eth0 and eth1 as respectively public and private interfaces Please note the interface naming should be the same on all nodes of the cluster In the current case eth2 was also initialized in order to set up the private interconnect redundant network
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
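For illustration, a static configuration for the second private interface could look like the sketch below. It assumes the standard RHEL 6 network-scripts syntax and the addresses used in this paper for oracle52; the actual file content is managed per site policy.
# /etc/sysconfig/network-scripts/ifcfg-eth2  (sketch)
DEVICE=eth2
BOOTPROTO=static
IPADDR=192.168.1.52
NETMASK=255.255.0.0
ONBOOT=yes
NM_CONTROLLED=no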
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34    oracle34
172.16.0.35    scan2
172.16.0.36    scan3
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
172.16.0.52    oracle52
172.16.0.53    oracle53
During the installation process IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization
bull An operating system configured network time protocol (NTP)
bull Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check if the NTP server is reachable. The reach value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520  15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in the ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520  15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config.
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify explicitly the uid and gid.
Let's check first if the uids and gids are used or not:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
   ssh nodename1 date
   ssh nodename2 date
2. Generate the keys:
   ssh-keygen -b 1024 -t dsa
   Accept the default value without passphrase.
3. Export the public key to the remote node:
   cd ~/.ssh
   scp id_dsa.pub nodename2:.ssh/id_dsa_usernamenodename1.pub
4. Create the trusted connection file:
   cat id_dsa.pub >> authorized_keys
   cat id_dsa_usernamenodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
(should return the date of node1)
ssh nodename2 date
(should return the date of node2)
ssh private_interconnect_nodename1 date
(should return the date of node1)
ssh private_interconnect_nodename2 date
(should return the date of node2)
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
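As a quick sanity check (a sketch, not part of the original procedure), the effective soft limits can be verified once the users log in again; the values in the comment are the soft limits configured above:
# verify the new shell limits for the grid user (run as root)
su - grid -c 'ulimit -u -n -s -l'
# expected (soft): nproc 2047, nofile 1024, stack 10240, memlock 41984000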
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
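For instance, a concrete sketch of the commands on this setup would be:
chkconfig multipathd on
service multipathd start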
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the number of DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes by adding the last "bkp" title entry shown below, which points to the saved initramfs:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
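After the reboot, the active values can be checked through sysfs; a small sketch, using the qla2xxx module parameter names set in fc-hba.conf above:
# verify the qla2xxx settings were picked up by the rebuilt initramfs
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth        # expected: 16
cat /sys/module/qla2xxx/parameters/qlport_down_retry    # expected: 10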
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipath subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block device to the raw device, as raw is no longer supported.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib This white paper is available here
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script activates or deactivates the automatic startup of the package.
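A sketch of the two calls, using the options listed by the oracleasm init script itself:
/etc/init.d/oracleasm disable   # do not load the ASMLib driver at boot
/etc/init.d/oracleasm enable    # load the ASMLib driver at boot again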
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should be owned by grid and belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.; a short querydisk sketch is shown below.
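A minimal example, assuming the disk names created above (the exact output wording may vary with the oracleasm-support version):
[root@oracle52 ~]# oracleasm querydisk DATA01
Disk "DATA01" is a valid ASM disk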
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                            Usage                                    Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user      –
/u01/app/base                   $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user           –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
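Note that the elevator= kernel parameter sets the scheduler for every block device. As an alternative, a udev rule can apply deadline only to the multipath devices used by ASM. The following is a sketch only, under the assumption that the DM_NAME key is populated by the device-mapper udev rules; the file name 99-oracle-schedulers.rules is hypothetical:
# /etc/udev/rules.d/99-oracle-schedulers.rules (sketch)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data01|fra01|voting", \
  RUN+="/bin/sh -c 'echo deadline > /sys/block/%k/queue/scheduler'"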
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation, and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the user name and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf '%{version}' redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
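The check that runcluvfy.sh performs can be reproduced manually; a sketch (once the dummy rpm is installed, this query should return the string the utility expects):
# per the note above, cluvfy expects this to print 6Server
rpm -q --qf '%{version}\n' redhat-release-server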
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a DISPLAY.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
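Another quick way to review the SCAN setup once the stack is up is through srvctl; a sketch, whose output will reflect the SCAN name and VIPs defined earlier:
[grid@oracle52 ~]$ srvctl config scan
[grid@oracle52 ~]$ srvctl status scan_listener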
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory [/usr/local/bin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from one node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
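For reference, a silent database creation would look roughly like the sketch below. The flags should be verified with dbca -help for your exact release, and the template, passwords and disk group names are only examples:
[oracle@oracle52 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc -gdbName HP12C \
  -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword welcome1 -systemPassword welcome1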
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
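Server pools can also be created and inspected from the command line with srvctl; the pool name and sizing below are only an illustration:
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool pool1 -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool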
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
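The runtime status of the SCAN VIP and of the SCAN listener can also be checked with srvctl (standard commands, shown here without their output):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener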
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
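A similar pre-installation check can be run before launching the Grid Infrastructure installer, for example (node names as used in this paper):
[grid@oracle52 ~]$ cluvfy stage -pre crsinst -n oracle52,oracle53 -verbose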
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Connectivity
The diagram in figure 6 below shows a basic representation of the components connectivity
Figure 6 Componentrsquos connectivity
System pre-requisites
This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database
Memory requirement
Check the available RAM and the swap space on the system. The minimum required is 4GB in an Oracle RAC cluster.
[rootoracle52 ~] grep MemTotal procmeminfo
MemTotal 198450988 kB
[rootoracle52 ~] grep SwapTotal procmeminfo
SwapTotal 4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM            Swap
4 to 16 GB     1 times the RAM size
> 16 GB        16 GB
Our HP ProLiant blades had 192GB of memory, so we created a 4GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB)
[rootoracle52 ~] swapon -s
Filename Type Size Used
Priority
devdm-3 partition 4194296 0 -1
The free command gives an overview of the current memory consumption The -g extension provides values in GB
[rootoracle52 ~] free -g
total used free shared buffers cached
Mem 189 34 154 0 0 29
-+ bufferscache 5 184
Swap 3 0 3
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case /tmp is part of the root file system (/). Even if this is not an optimal setting, we are far above the 1GB of free space.
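If /tmp had been too small, a common workaround (not needed in our setup) is to point the Oracle installer to another, larger location before starting it; the directory below is only an example:
[oracle@oracle52 ~]$ mkdir -p /u01/tmp
[oracle@oracle52 ~]$ export TMP=/u01/tmp TMPDIR=/u01/tmp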
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install run the following command at the operating system prompt as the root user
[rootoracle52 ~] uname -m
x86_64
By the way note that Oracle 12c is not available for Linux 32-bit architecture
Then check the distribution and version you are using
[rootoracle53 ~] more etcredhat-release
Red Hat Enterprise Linux Server release 64 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 64 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution which is delivered as a single ISO image This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8 G7 and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hpcomgosppdocumentation See figure 8 for download information
For the pre-requisites about HP SUM look at the installation documentation httph18004www1hpcomproductsserversmanagementunifiedhpsum_infolibraryhtml
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP we need first to mount the ISO image Then from an X terminal run the hpsum executable
[root@oracle52 kits]# mkdir /cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282iso /cdrom
[root@oracle52 kits]# cd /cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 64 supplement for HP SPP you must first untar the file before running hpsum again
[rootoracle52 kits] mkdir supspprhel6
[rootoracle52 kits] mv supspprhel64entargz supspprhel6
[rootoracle52 kits] cd supspprhel6
[rootoracle52 kits] tar xvf supspprhel64entargz
[rootoracle52 kits] hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service httpdownloadslinuxhpcomSDR
This site provides yum and apt repositories for Linux-related software packages Much of this content is also available from various locations at hpcom in ISO or tgz format but if you prefer to use yum or apt you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpmdeb packages from HP
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (like Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look at the dmesg output to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility; look for it in /opt/hp.
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD It provides scsi-rescan (sym-linked to rescan-scsi-bussh)
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (*)
kernel.shmmax                    Half of the RAM or 4GB (**)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576
(*) 8239044 in our case
(**) the maximum is 4294967296
[rootoracle52 tmp] getconf PAGE_SIZE
4096
[rootoracle52 tmp] grep MemTotal procmeminfo
MemTotal 32956176 kB
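The two values can be derived with a small sketch like the following, which applies the rule used in this paper (shmmax = half of the RAM in bytes, shmall = shmmax divided by the page size):
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "kernel.shmmax = $((MEM_KB / 2 * 1024))"
echo "kernel.shmall = $((MEM_KB / 2 * 1024 / $(getconf PAGE_SIZE)))"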
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
bull binutils-2205102-511el6 (x86_64)
bull compat-libcap1-110-1 (x86_64)
bull compat-libstdc++-33-323-69el6 (x86_64)
bull compat-libstdc++-33-323-69el6i686
bull gcc-444-13el6 (x86_64)
bull gcc-c++-444-13el6 (x86_64)
bull glibc-212-17el6 (i686)
bull glibc-212-17el6 (x86_64)
bull glibc-devel-212-17el6 (x86_64)
bull glibc-devel-212-17el6i686
bull ksh
bull libgcc-444-13el6 (i686)
bull libgcc-444-13el6 (x86_64)
bull libstdc++-444-13el6 (x86_64)
bull libstdc++-444-13el6i686
bull libstdc++-devel-444-13el6 (x86_64)
bull libstdc++-devel-444-13el6i686
bull libaio-03107-10el6 (x86_64)
bull libaio-03107-10el6i686
bull libaio-devel-03107-10el6 (x86_64)
bull libaio-devel-03107-10el6i686
bull libXext-11 (x86_64)
bull libXext-11 (i686)
bull libXtst-10992 (x86_64)
bull libXtst-10992 (i686)
bull libX11-13 (x86_64)
bull libX11-13 (i686)
bull libXau-105 (x86_64)
bull libXau-105 (i686)
bull libxcb-15 (x86_64)
bull libxcb-15 (i686)
bull libXi-13 (x86_64)
bull libXi-13 (i686)
bull make-381-19el6
bull sysstat-904-11el6 (x86_64)
bull unixODBC-2214-11el6 (64-bit) or later
bull unixODBC-devel-2214-11el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of each package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
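As a convenience, a short loop can report any missing packages at once; the list below is truncated to a few names and should be extended to the full list above:
for p in binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libstdc++ libstdc++-devel make sysstat
do
rpm -q $p > /dev/null 2>&1 || echo "$p is missing"
done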
Finally installation of the packages should be done using yum This is the easiest way as long as a repository server is available
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
bull With rw and exec permissions set on it
bull Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
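The new options can then be applied without a reboot by remounting the file system and checking the result:
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep tmpfs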
Preparing the network
Oracle RAC needs at least two physical interfaces: the first one is dedicated to the interconnect traffic; the second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks each nodes private interface for interconnects must be on the same subnet and that subnet must be connected to every node of the cluster
For clusters using Redundant Interconnect Usage each private interface should be on a different subnet However each cluster member node must have an interface on each private interconnect subnet and these subnets must connect to every node of the cluster
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies This functionality is available starting with Oracle Database 11g Release 2 (11202) If you use the Oracle Clusterware Redundant Interconnect feature then you must use IPv4 addresses for the interfaces
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
bull A public IP address for each node
bull A virtual IP address for each node
bull A private IP address for each node
bull Three single client access name (SCAN) addresses for the cluster Note the SCAN cluster name needs to be resolved by the DNS and should not be stored in the etchosts file Three addresses is a recommendation
Before moving forward we need to define the nodes and cluster information
Data Value
Cluster name okc12c
SCAN address 1 17216034
SCAN address 2 17216035
SCAN address 3 17216036
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 17216052 17216053
Server VIP name oracle52vip oracle53vip
Server VIP address 17216032 17216033
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192168052 192168053
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192168152 192168153
The current configuration should contain at least the following eth0 and eth1 as respectively public and private interfaces Please note the interface naming should be the same on all nodes of the cluster In the current case eth2 was also initialized in order to set up the private interconnect redundant network
[rootoracle52 ~] ip addr
1 lo ltLOOPBACKUPLOWER_UPgt mtu 16436 qdisc noqueue state UNKNOWN
linkloopback 000000000000 brd 000000000000
inet 1270018 scope host lo
inet6 1128 scope host
valid_lft forever preferred_lft forever
2 eth0 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3c brd ffffffffffff
inet 1721605321 brd 172160255 scope global eth0
inet6 fe80217a4fffe77ec3c64 scope link
valid_lft forever preferred_lft forever
3 eth1 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3e brd ffffffffffff
inet 19216805324 brd 1921680255 scope global eth1
inet6 fe80217a4fffe77ec3e64 scope link
valid_lft forever preferred_lft forever
4 eth2 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec40 brd ffffffffffff
inet 19216815316 brd 192168255255 scope global eth2
inet6 fe80217a4fffe77ec4064 scope link
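The persistent configuration of these interfaces lives in /etc/sysconfig/network-scripts. As an illustration only, a minimal ifcfg-eth1 for the first private interconnect of node 1 could look like the following (HWADDR and any site-specific options omitted):
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.52
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no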
Enter into etchosts addresses and names for
bull interconnect names for system 1 and system 2
bull VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process IPv6 can be unselected IPv6 is not supported for the private interconnect traffic
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes During installation the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware The time zone default is used for databases Oracle ASM and any other managed processes
Two options are available for time synchronization
bull An operating system configured network time protocol (NTP)
bull Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services If you use NTP then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode If you do not have NTP daemons then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server In this case Oracle will log warning messages into the CRS log as shown below These messages can be ignored
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 165528920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project
# Please consider joining the pool (http://www.pool.ntp.org/join.html)
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd [ OK ]
Starting ntpd [ OK ]
Check if the NTP server is reachable The value in red needs to be higher than 0
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
In case the time difference between the database server and the NTP server is too large you might have to manually resynchronize your server Use the command below for this
[rootoracle52 ~] service ntpd stop
[rootoracle52 ~] ntpdate ntphpnet
[rootoracle52 ~] service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[rootoracle53 kits] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 1721625510 3 u 6 64 1 128719 5275 0000
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 172165810 3 u 3 64 1 108900 12492 0000
The error will be log as
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue it is mandatory to get the same refid on all nodes of the cluster Best case is to point to a single NTP server or to a GPS server as shown in the example below
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
Check the SELinux setting
In some circumstances the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system
# SELINUX= can take one of these three values
#     enforcing - SELinux security policy is enforced
#     permissive - SELinux prints warnings instead of enforcing
#     disabled - SELinux is fully disabled
SELINUX=disabled
This update is static and requires a reboot of the server. In order to change the SELinux mode dynamically, use the following commands:
[rootoracle52 oraInventory] getenforce
Enforcing
[rootoracle52 oraInventory] setenforce 0
[rootoracle52 oraInventory] getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC
[rootoracle52 vnc] service iptables stop
iptables Flushing firewall rules [ OK ]
iptables Setting chains to policy ACCEPT filter [ OK ]
iptables: Unloading modules [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
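If the firewall is not needed at all on these nodes, the service can also be disabled persistently so that it does not come back at the next boot (review your security policy before doing so):
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off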
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E '504|505|506|507|508|509' /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E '501|502' /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[rootoracle52 sshsetup] passwd oracle
[rootoracle52 sshsetup] passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run you must remove stty commands from the profiles of any Oracle software installation
owners and remove other security measures that are triggered during a login and that generate messages to the terminal These messages mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer If they are not disabled then SSH must be configured manually before an installation can be run
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user During the script execution the user password needs to be provided 4 times We also included a basic connection check in the example below
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[gridoracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 140513 CEST 2013
[gridoracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracleoracle52 ~]$ ssh oracle53 date
Wed Jul 24 140216 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually export the rsa public key file from the remote node to the local node, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1 On each node check if ssh is already active
ssh nodename1 date
ssh nodename2 date
2 Generate key
ssh-keygen -b 1024 -t dsa
Accept default value without passphrase
3 Export public key to the remote node
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_clunodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
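After the file has been updated, a new login shell should reflect the values; a quick check of the soft limits (the output will be similar to the following) can be done as root:
[root@oracle52 ~]# su - grid -c 'ulimit -n -u -s'
open files                      (-n) 1024
max user processes              (-u) 2047
stack size              (kbytes, -s) 10240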
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[rootoracle52 rpm] CVUQDISK_GRP=oinstall export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[rootoracle52 rpm] rpm -ivh cvuqdisk-109-1rpm
Preparing [100]
1cvuqdisk [100]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper This is not the case anymore with Red Hat 6x However a reference guide is still maintained and is available in the HP storage reference site SPOCK (login required) The document can be reached here
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system use the lspci command
[rootoracle52 yumreposd] lspci|grep Fibre
05000 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05001 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running
[rootoracle52 ~] chkconfig --list |grep multi
multipathd 0off 1off 2off 3on 4on 5on 6off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
bull System defaults (defaults)
bull Black-listed devices (devnode_blacklistblacklist)
bull Storage array model settings (devices)
bull Multipath device settings (multipaths)
bull Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    # don't count normal SATA devices as multipaths
    device {
        vendor "ATA"
    }
    # don't count 3ware devices as multipaths
    device {
        vendor "3ware"
    }
    device {
        vendor "AMCC"
    }
    # nor highpoint devices
    device {
        vendor "HPT"
    }
    device {
        vendor "HP"
        product "Virtual_DVD-ROM"
    }
    wwid "*"
}
blacklist_exceptions {
    wwid "360002ac0000000000000001f00006e40"
}
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section and use these values:
multipathconf written by anaconda
defaults
user_friendly_names yes
devices
device
vendor 3PARdata
product VV
path_grouping_policy multibus
getuid_callout libudevscsi_id --whitelisted --device=devn
path_selector round-robin 0
path_checker tur
hardware_handler 0
failback immediate
rr_weight uniform
rr_min_io_rq 100
no_path_retry 18
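After editing /etc/multipath.conf, the new device settings can be applied without a reboot. A minimal sketch, assuming the standard device-mapper-multipath tools shipped with RHEL 6 (run as root on each node):
service multipathd reload      # re-read /etc/multipath.conf
multipath -F                   # flush unused multipath maps
multipath -v2                  # rebuild the maps with the new settings
multipath -ll                  # confirm the 3PARdata,VV LUNs now show the expected policy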
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
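Before rebooting, the freshly built image can be checked to confirm that it exists and picked up the multipath and QLogic pieces. A quick sanity check, assuming the image was built for the running kernel:
ls -l /boot/initramfs-$(uname -r).img                                     # a newly dated image should exist
lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'multipath|qla2xxx'    # driver and configuration present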
Finally, we may update the boot menu for rollback purposes by adding a backup entry (the last title section below) that boots the renamed initramfs:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (see the sketch after this list)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
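For the first option, a minimal sketch of what could be appended to /etc/rc.local is shown below. The device names match the aliases created earlier, but the owner, group and mode shown are only an assumption and would have to match your own grid/oracle users and groups; this is an illustration only, since ASMLib handles permissions in our setup:
# re-apply ownership of the shared ASM volumes at every boot (illustrative values)
chown grid:asmadmin /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01
chmod 0660 /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01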
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
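A quick way to confirm that all three packages landed on the node is to query the rpm database (output is indicative only):
rpm -qa | grep -i oracleasm
# expected to list kmod-oracleasm, oracleasm-support and oracleasmlib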
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
The disable/enable option of the oracleasm script will activate or deactivate the automatic startup of the package.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
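For example, querydisk can be used to confirm which block device backs a given ASM disk label. A minimal sketch, assuming the oracleasm-support tools installed above (options may vary slightly between versions, and the output below is indicative only):
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
# prints the disk label and the major,minor number of the underlying device
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01
# -p additionally lists the matching /dev paths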
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
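After changing /etc/sysconfig/oracleasm, the driver can be restarted and the disks rescanned so that the new scan order is taken into account. A short check, assuming the same node (commands provided by the oracleasm init script and support tools):
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks     # DATA01, FRA01 and VOTING should still be listed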
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of free space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the correct privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the correct privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
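Because the dm-N numbering can change after a reboot or a rescan, it may be safer to derive the device-mapper name from the multipath alias before setting the scheduler. A small sketch, assuming the aliases created earlier and that the /dev/mapper entries are symlinks to the dm-N nodes (as on RHEL 6):
for alias in voting data01 fra01; do
    dm=$(basename $(readlink -f /dev/mapper/$alias))   # resolves e.g. data01 -> dm-5
    echo deadline > /sys/block/$dm/queue/scheduler
done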
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
...
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
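One way to revert the change afterwards is simply to remove or comment out the two NOPASSWD entries with visudo. An illustrative sketch (the exact lines depend on what was added earlier):
[root@oracle52 ~]# visudo
# grid    ALL=(ALL)       NOPASSWD: ALL     <- comment out or delete after the installation
# oracle  ALL=(ALL)       NOPASSWD: ALL     <- comment out or delete after the installation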
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command "rpm -q --qf %{version} redhat-release-server" and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
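Since the output was redirected to a file, a quick way to review the result is to search the log for failed checks; for example:
grep -iE 'failed|error' /tmp/cluvfy.log     # the swap space warning shows up here and can be ignored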
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session, or a terminal X and a DISPLAY.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a Flex Cluster.
Select "Advanced Installation".
Select optional languages if needed.
Enter the cluster name and the SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
        -n      print node number with the node name
        -p      print private interconnect address for the local node
        -i      print virtual IP address with the node name
        <node>  print information for the specified node
        -l      print information for the local node
        -s      print node status - active or inactive
        -t      print node type - pinned or unpinned
        -g      turn on logging
        -v      run in debug mode; use at direction of Oracle Support only
        -c      print clusterware name
        -a      print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
oracle53:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name           Target  State        Server        State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52      Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52      STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52      STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52      STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52      OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                    STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52      STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52      STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52      STABLE
The command below can be used with the "-t" extension for a shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    oracle52
ora.FRA.dg     ora....up.type ONLINE    ONLINE    oracle52
ora....ER.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora....N1.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    oracle52
ora.asm        ora.asm.type   ONLINE    ONLINE    oracle52
ora.cvu        ora.cvu.type   ONLINE    ONLINE    oracle52
ora.mgmtdb     ora....db.type ONLINE    ONLINE    oracle52
ora....network ora....rk.type ONLINE    ONLINE    oracle52
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    oracle52
ora.ons        ora.ons.type   ONLINE    ONLINE    oracle52
ora....SM1.asm application    ONLINE    ONLINE    oracle52
ora....52.lsnr application    ONLINE    ONLINE    oracle52
ora....e52.ons application    ONLINE    ONLINE    oracle52
ora....e52.vip ora....t1.type ONLINE    ONLINE    oracle52
ora....SM2.asm application    ONLINE    ONLINE    oracle53
ora....53.lsnr application    ONLINE    ONLINE    oracle53
ora....e53.ons application    ONLINE    ONLINE    oracle53
ora....e53.vip ora....t1.type ONLINE    ONLINE    oracle53
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf"...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups, so there is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed easily by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512   4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512   4096  1048576     20480    20384                0           20384              0             N  VOTING
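As an alternative to ASMCA, a disk group can also be created directly from the ASM instance with SQL*Plus. A minimal sketch, assuming the grid user environment (ORACLE_SID=+ASM1, grid home in the PATH) and the ASMLib disk labels created earlier; adapt the disk group name and disk list to your own layout:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> ALTER DISKGROUP DATA MOUNT;   -- needed on the other node(s), where the new group is not mounted automatically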
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oracle:oinstall) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster of 4 CPU sockets maximum.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed.
Create a RAC database
Connect as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and whether to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The listener service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
On node 1:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
On node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET                # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
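The runtime status of the SCAN VIP and SCAN listener can also be checked with srvctl; for example (output is indicative and may differ slightly on your system):
[grid@oracle52 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node oracle52
[grid@oracle52 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node oracle52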
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01)  [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable or enable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
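For planned maintenance of a single node, the stack can also be stopped and started manually; an illustrative sketch, run as root on the node concerned:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs      # stops Clusterware and all resources it manages on this node
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs     # restarts the stack; resources return according to their autostart policy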
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies, tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether with cloud solutions or by converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
The swap volume size may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:
RAM Swap
4 to 16 GB 1 times the RAM size
> 16 GB 16 GB
Our HP ProLiant blades had 192GB of memory, so we created a 4GB swap volume. This is below the recommendation; however, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance.
The command swapon -s tells how much swap space exists on the system (in KB):
[root@oracle52 ~]# swapon -s
Filename          Type       Size     Used   Priority
/dev/dm-3         partition  4194296  0      -1
The free command gives an overview of the current memory consumption. The -g extension provides values in GB:
[root@oracle52 ~]# free -g
             total   used   free   shared   buffers   cached
Mem:           189     34    154        0         0       29
-/+ buffers/cache:      5    184
Swap:            3      0      3
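Should a larger swap area ever be required later, it can be added without touching the existing layout. The sketch below is one possible way, assuming a 4GB swap file at /swapfile (a hypothetical path, not part of our setup):
[root@oracle52 ~]# dd if=/dev/zero of=/swapfile bs=1M count=4096
[root@oracle52 ~]# chmod 600 /swapfile
[root@oracle52 ~]# mkswap /swapfile
[root@oracle52 ~]# swapon /swapfile
[root@oracle52 ~]# echo "/swapfile swap swap defaults 0 0" >> /etc/fstab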
Check the temporary space available
Oracle recommends having at least 1GB of free space in /tmp.
[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case, /tmp is part of /. Even if this is not an optimal setting, we are far above the 1GB of free space.
Check for the kernel release
To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
[root@oracle52 ~]# uname -m
x86_64
Note that Oracle 12c is not available for 32-bit Linux.
Then check the distribution and version you are using:
[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally go to My Oracle Support and check if this version is certified in the certification tab as shown in figure 7
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide, found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM, look at the installation documentation: http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable:
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013.02.0-0_725490-001_spp_2013.02.0-SPP2013020B.2013_0628.2.iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 swpackages]# ./hpsum
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
14
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. The SPP will require a reboot of the server once fully installed.
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
15
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again:
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspp.rhel6.4.en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 supspprhel6]# tar xvf supspp.rhel6.4.en.tar.gz
[root@oracle52 supspprhel6]# ./hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (such as Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with dmesg to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp:
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
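As a quick illustration (a sketch, assuming the package is installed from a configured repository or the DVD), the script can be run without arguments to scan every SCSI host:
[root@oracle52 ~]# yum install sg3_utils
[root@oracle52 ~]# rescan-scsi-bus.sh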
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (*)
kernel.shmmax half of the RAM, or 4GB (**)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(**) max is 4294967296
(*) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
# (half the size of physical memory, in bytes)
kernel.shmmax = 101606905856
# Controls the maximum number of shared memory segments, in pages
# (half the size of physical memory, in pages)
kernel.shmall = 24806374
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
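A quick way to confirm that the values are actually active in the running kernel (a sketch using the same parameter names as above):
[root@oracle52 ~]# sysctl kernel.sem kernel.shmmax kernel.shmall fs.file-max net.ipv4.ip_local_port_range
kernel.sem = 250 32000 100 128
kernel.shmmax = 101606905856
kernel.shmall = 24806374
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500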
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release shown is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
18
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will specify the base architecture of a given package:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
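Rather than checking and installing the packages one by one, a single yum transaction can pull in the whole list. This is only a sketch, assuming a configured repository; yum resolves versions at or above the minimums listed earlier:
[root@oracle52 ~]# yum install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
  gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 \
  libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 \
  libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
  libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel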
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server, and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward, we need to define the nodes and cluster information:
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
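For reference, a minimal static configuration for one of the private interfaces might look like the sketch below (values taken from the table above; the exact file content on your systems may differ):
[root@oracle52 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.52
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no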
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52     # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                [ OK ]
Starting ntpd:                     [ OK ]
Check if the NTP server is reachable. The "reach" value below needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp      .GPS.        1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. In that case ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be log as
INFO INFO Error Message: PRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error Message: PRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server, or to a GPS-based server as shown in the example below:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp      .GPS.        1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[rootoracle52 oraInventory] getenforce
Enforcing
[rootoracle52 oraInventory] setenforce 0
[rootoracle52 oraInventory] getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:              [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules:                    [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first if the uids and gids are used or not:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way, passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the keys:
ssh-keygen -b 1024 -t dsa
Accept the default values, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
(should return the date of node1)
ssh nodename2 date
(should return the date of node2)
ssh private_interconnect_nodename1 date
(should return the date of node1)
ssh private_interconnect_nodename2 date
(should return the date of node2)
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, so you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...               ########################################### [100%]
   1:cvuqdisk              ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable The blacklist section defines which devices should be excluded from the multipath topology discovery The blacklist_exceptions section defines which devices should be included in the multipath topology discovery
despite being listed in the blacklist section The multipaths section defines the multipath topologies They are indexed by a World Wide Identifier (WWID) The devices section defines the device-specific settings based on vendor
and product values
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features, or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include arrays which are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, using these values:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        path_checker tur
        hardware_handler "0"
        failback immediate
        rr_weight uniform
        rr_min_io_rq 100
        no_path_retry 18
    }
}
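After editing /etc/multipath.conf, the running daemon has to re-read it. One way to do that (a sketch) is:
[root@oracle52 ~]# service multipathd reload
or, from the interactive multipathd shell:
[root@oracle52 ~]# multipathd -k
multipathd> reconfigure
multipathd> show config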
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the backup entry (the last "title" block below):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition. This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
    initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
    initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
    root (hd0,0)
    kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
    initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system, and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
    multipath {
        wwid "360002ac0000000000000002100006e40"
        alias voting
    }
    multipath {
        wwid "360002ac0000000000000002200006e40"
        alias data01
    }
    multipath {
        wwid "360002ac0000000000000002300006e40"
        alias fra01
    }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
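If that udev route were chosen, the rules could be applied without a reboot; a minimal sketch:
[root@oracle52 ~]# udevadm control --reload-rules
[root@oracle52 ~]# udevadm trigger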
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib This white paper is available here
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk, once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, and listdisks.
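For instance, querydisk can be used to confirm that a label is in place before going further (a sketch; the output shown is indicative only):
[root@oracle52 ~]# oracleasm querydisk DATA01
Disk "DATA01" is a valid ASM disk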
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order giving priority to the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path Usage Size
/u01/app/oracle $ORACLE_BASE for the oracle db owner 5.8GB
/u01/app/oracle/12c $ORACLE_HOME for the oracle db user –
/u01/app/base $ORACLE_BASE for the grid owner 3.5GB
/u01/app/grid/12c $ORACLE_HOME for the grid user –
/dev/oracleasm/disks/FRA01 Flash recovery area (ASM) 20GB
/dev/oracleasm/disks/VOTING OCR (volume) 2GB
/dev/oracleasm/disks/DATA01 Database (volume) 20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the correct privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the correct privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
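Note that elevator=deadline in grub.conf applies to every block device. If a per-device setting is preferred, a small udev rule is another commonly used option; the sketch below (the file name and scope are our own assumption, not part of the original setup) sets Deadline only on device-mapper devices:
[root@oracle52 ~]# cat /etc/udev/rules.d/99-asm-scheduler.rules
# set the Deadline elevator on every device-mapper block device
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"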
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)    ALL
grid    ALL=(ALL)    NOPASSWD: ALL
oracle  ALL=(ALL)    NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user, as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...               ########################################### [100%]
   1:redhat-release        ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal with a valid DISPLAY.
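For example (a minimal sketch; the display number is an assumption), a VNC session can be started for the grid user and the DISPLAY variable exported before launching the installer:
[grid@oracle52 ~]$ vncserver :1
[grid@oracle52 ~]$ export DISPLAY=:1
[grid@oracle52 ~]$ ./runInstaller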
In a basic single installation environment, there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example, the goal is to install a standard cluster, not a Flex Cluster.
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
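You can verify beforehand that the SCAN name resolves correctly from every node, for example (assuming the SCAN name oracle34 defined for this cluster):
[grid@oracle52 ~]$ nslookup oracle34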
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
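For scripted environments, a disk group can also be created without ASMCA, directly from SQL*Plus on the ASM instance. The sketch below is only an illustration and assumes a hypothetical ASMLib disk labeled DATA02 presented to both nodes:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
The new disk group is mounted only on the local ASM instance; mount it on the remaining nodes with ALTER DISKGROUP DATA2 MOUNT (or through srvctl).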
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup, or its verification, can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster of 4 CPU sockets maximum.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from one node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
At this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
In node 2
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR)))) # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
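The runtime status of the SCAN VIP and SCAN listener can be checked the same way, for instance:
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener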
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
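The current autostart setting can be verified with the same tool, for instance:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs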
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies, tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Finally, go to My Oracle Support and check whether this version is certified, in the certification tab, as shown in figure 7.
Figure 7 Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers, as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information.
For the pre-requisites about HP SUM look at the installation documentation httph18004www1hpcomproductsserversmanagementunifiedhpsum_infolibraryhtml
The latest SPP for Red Hat 64 as well as a supplement for RHEL 64 can be downloaded from hpcom httph20566www2hpcomportalsitehpsctemplatePAGEpublicpsiswdHomesp4tsoid=5177950ampspf_ptpst=swdMainampspf_pprp_swdMain=wsrp-navigationalState3DswEnvOID253D4103257CswLang253D257Caction253DlistDriverampjavaxportletbegCacheTok=comvignettecachetokenampjavaxportletendCacheTok=comvignettecachetokenApplication20-
Figure 8 Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable:
[root@oracle52 kits]# mkdir cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013020-0_725490-001_spp_2013020-SPP2013020B2013_06282iso cdrom
[root@oracle52 kits]# cd cdrom/hp/swpackages
[root@oracle52 kits]# ./hpsum
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for almost 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again:
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (e.g., Fibre Channel, SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find out what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1 host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look for log messages with "dmesg" to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
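For example, a quick check of the kernel messages and of the SCSI devices the kernel currently sees:
[root@oracle52 ~]# dmesg | tail -20
[root@oracle52 ~]# grep -c Vendor /proc/scsi/scsi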
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp:
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance <SCSI host number>
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:
Parameter Value
kernel.sem (semmsl) 250
kernel.sem (semmns) 32000
kernel.sem (semopm) 100
kernel.sem (semmni) 128
kernel.shmall physical RAM size / pagesize (*)
kernel.shmmax Half of the RAM or 4GB (**)
kernel.shmmni 4096
fs.file-max 6815744
fs.aio-max-nr 1048576
net.ipv4.ip_local_port_range 9000 65500
net.core.rmem_default 262144
net.core.rmem_max 4194304
net.core.wmem_default 262144
net.core.wmem_max 1048576
(*) 8239044 in our case
(**) max is 4294967296
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal: 32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856 # Half the size of physical memory, in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374 # Half the size of physical memory, in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Run sysctl -p to load the updated parameters into the current session.
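A quick sketch of loading and verifying the values:
[root@oracle52 ~]# sysctl -p
[root@oracle52 ~]# sysctl -a | grep -E "kernel.sem|kernel.shmmax|kernel.shmall|fs.file-max"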
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the package release shown is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1 # check the exact release
or
rpm -qa | grep make # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of a given package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386 0.3.106-5 rhel-x86_64-server-5
libaio-devel.x86_64 0.3.106-5 rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs /dev/shm tmpfs defaults 0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
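To apply the new options without a reboot, the file system can simply be remounted; a minimal sketch:
[root@oracle52 ~]# mount -o remount,rw,exec /dev/shm
[root@oracle52 ~]# mount | grep tmpfs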
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: This installation guide documents how to perform a typical installation; it doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward, we need to define the nodes and cluster information.
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
The current configuration should contain at least eth0 and eth1, as the public and private interfaces respectively. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the redundant private interconnect network.
[rootoracle52 ~] ip addr
1 lo ltLOOPBACKUPLOWER_UPgt mtu 16436 qdisc noqueue state UNKNOWN
linkloopback 000000000000 brd 000000000000
inet 1270018 scope host lo
inet6 1128 scope host
valid_lft forever preferred_lft forever
2 eth0 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3c brd ffffffffffff
inet 1721605321 brd 172160255 scope global eth0
inet6 fe80217a4fffe77ec3c64 scope link
valid_lft forever preferred_lft forever
3 eth1 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec3e brd ffffffffffff
inet 19216805324 brd 1921680255 scope global eth1
inet6 fe80217a4fffe77ec3e64 scope link
valid_lft forever preferred_lft forever
4 eth2 ltBROADCASTMULTICASTUPLOWER_UPgt mtu 1500 qdisc mq state UP qlen 1000
linkether 0017a477ec40 brd ffffffffffff
inet 19216815316 brd 192168255255 scope global eth2
inet6 fe80217a4fffe77ec4064 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 165528920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52 # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd [ OK ]
Starting ntpd [ OK ]
Check that the NTP server is reachable. The reach value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
In case the time difference between the database server and the NTP server is too large, you might have to resynchronize your server manually. Use the commands below for this:
[rootoracle52 ~] service ntpd stop
[rootoracle52 ~] ntpdate ntphpnet
[rootoracle52 ~] service ntpd start
If you are using NTP, and you plan to continue using it instead of the Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[rootoracle53 kits] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 1721625510 3 u 6 64 1 128719 5275 0000
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 172165810 3 u 3 64 1 108900 12492 0000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process, as shown in figure 9 below.
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best option is to point to a single NTP server or to a GPS server, as shown in the example below:
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to change the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [ OK ]
iptables: Setting chains to policy ACCEPT: filter          [ OK ]
iptables: Unloading modules:                               [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa public key from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the keys:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username_nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username_nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
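To verify that a new session actually picks up these values, a quick optional check (not part of the original procedure) can be run as the grid or oracle user:
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ ulimit -Sn     # soft nofile, expected 1024
[grid@oracle52 ~]$ ulimit -Hn     # hard nofile, expected 65536
[grid@oracle52 ~]$ ulimit -Su     # soft nproc, expected 2047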
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                                               [100%]
   1:cvuqdisk                                              [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off  1:off  2:off  3:on   4:on   5:on   6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a Multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current fresh installed configuration
[rootoracle52 yumreposd] multipathd -k
multipathdgt show Config
hellip
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
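If /etc/multipath.conf is modified again later, the configuration can be reloaded without a reboot; a minimal sequence (assuming the multipathd daemon is already running, as checked above) is:
[root@oracle52 ~]# multipathd -k
multipathd> reconfigure
multipathd> exit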
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the last title block shown below (the "bkp" entry, which boots the saved initramfs-2.6.32-358.14.1.el6.x86_64.img.yan):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names):
[rootoracle52 ~] multipath -F
[rootoracle52 ~] multipath ndashv1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission for the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a short sketch is shown after the udev example below)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
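For reference, the first option listed above (updating /etc/rc.local) would simply reset ownership and permissions at every boot. Below is a hypothetical sketch mirroring the values of the udev example; it is not applied in our configuration since ASMLib takes care of this:
# end of /etc/rc.local (hypothetical, not used in this setup)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01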
However, as ASMLib is used, there is no need to ensure permissions and device path persistency through udev or rc.local.
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library. It is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published, some time ago, a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disableenable option of the oracleasm script will activate or not the automatic startup of the package
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
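For instance, querydisk is handy to validate a label; a quick optional check (assuming the disks created above) would be:
[root@oracle52 ASMLib]# oracleasm querydisk DATA01        # should report a valid ASM disk
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01     # also prints the matching block device(s)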
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and Voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                           Usage                                     Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner      5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user       –
/u01/app/base                  $ORACLE_BASE for the grid owner           3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user            –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)                 20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                              2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                         20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id command. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
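The same setting can also be applied to all three ASM device-mapper devices in one pass; a small convenience loop (assuming dm-4, dm-5 and dm-6, as identified above) would be:
[root@oracle52 sys]# for dm in dm-4 dm-5 dm-6; do echo deadline > /sys/block/${dm}/queue/scheduler; done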
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
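When that time comes, the two entries can simply be removed or commented out again with visudo; for example:
[root@oracle52 ~]# visudo
# grid    ALL=(ALL)       NOPASSWD: ALL
# oracle  ALL=(ALL)       NOPASSWD: ALL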
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user, as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                                               [100%]
   1:redhat-release                                        [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform: download the clupack.zip file which is attached to the document, and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a Display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capacity using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it will manage the entire cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
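As an alternative to the ASMCA GUI, a disk group can also be created from SQL*Plus connected to the ASM instance; a short sketch using the ASMLib label created earlier (external redundancy, as in the GUI):
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';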
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible on a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
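Beyond srvctl config scan, two quick checks can confirm that the SCAN name resolves in DNS and that the SCAN listener resources are online. The commands below are a suggested addition (not part of the original output) and reuse the SCAN name oracle34 defined earlier:
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ nslookup oracle34        # should return the SCAN VIP 172.16.0.34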
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations, with well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the Cluster Verification Utility:
cluvfy stage -post hwos -n rac1,rac2
This checks the hardware and operating system setup.
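Another stage check worth running before the Grid Infrastructure installation itself (a suggested example, not shown in the original output) is the pre-installation verification:
[grid@oracle52 ~]$ cluvfy stage -pre crsinst -n oracle52,oracle53 -verbose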
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name         Disk group
--  -----    -----------------                  ---------         ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637   (ORCL:VOTING01)   [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
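To verify the overall stack state after enabling or disabling the autostart, the following crsctl commands can be used; this is a short illustrative sketch:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stat res -t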
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
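Note that the profile above references $ORACLE_HOME and $ORACLE_BASE without defining them. A minimal addition such as the following would make it self-contained; the paths are an assumption based on the directory layout used elsewhere in this document:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
export GRID_HOME=/u01/app/grid/12c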
Oracle user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Click Next
Provide the credentials for root and click Next
Select the components you need to install and click Install
A sample list of updates to be done is displayed. Click OK; the system will work for roughly 10 to 15 minutes.
Operation completed. Check the log. The SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for the HP SPP, you must first untar the file before running hpsum again:
[root@oracle52 kits]# mkdir supspprhel6
[root@oracle52 kits]# mv supspprhel64en.tar.gz supspprhel6
[root@oracle52 kits]# cd supspprhel6
[root@oracle52 kits]# tar xvf supspprhel64en.tar.gz
[root@oracle52 kits]# hpsum
Next, follow the same procedure as with the regular SPP.
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
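As an illustration, subscribing a server to the SDR with yum only requires a small repository definition. The file below is a sketch only; the exact baseurl depends on the current repository layout published on the SDR site:
[root@oracle52 ~]# cat /etc/yum.repos.d/hp-spp.repo
[hp-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hp.com/SDR/repo/spp/rhel/6/x86_64/current
enabled=1
gpgcheck=0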
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (Fibre Channel, SAS, etc.) you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host numbers are for the HBAs:
[root@oracle52 ~]# ls /sys/class/fc_host
host1  host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to take effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look at the "dmesg" output to see if it is working, and you can check /proc/scsi/scsi to see if the devices are there.
Alternatively, once the SPP is installed, you can use the hp_rescan utility; look for it in /opt/hp:
[root@oracle52 hp_fibreutils]# hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAs/CNAs
OPTIONS
-a, --all      - Rescan all Fibre Channel HBAs
-h, --help     - Prints this help message
-i, --instance - Rescan a particular instance <SCSI host number>
-l, --list     - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
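The manual procedure above can be scripted to cover every FC host at once. The loop below is a sketch based on the same sysfs interface; adjust the sleep time to your environment:
for h in /sys/class/fc_host/host*; do echo 1 > $h/issue_lip; done
sleep 15
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
dmesg | tail            # check that the new LUNs were discovered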
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:

Parameter                       Value
kernel.sem (semmsl)             250
kernel.sem (semmns)             32000
kernel.sem (semopm)             100
kernel.sem (semmni)             128
kernel.shmall                   physical RAM size / pagesize (*)
kernel.shmmax                   half of the RAM, or 4GB (**)
kernel.shmmni                   4096
fs.file-max                     6815744
fs.aio-max-nr                   1048576
net.ipv4.ip_local_port_range    9000 65500
net.core.rmem_default           262144
net.core.rmem_max               4194304
net.core.wmem_default           262144
net.core.wmem_max               1048576
(*) 8239044 in our case
(**) max is 4294967296

[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p to load the updated parameters in the current session.
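As a quick cross-check, the shmmax and shmall targets can be derived from the memory size reported above and compared with the running values. This is a sketch; the formulas follow the table above (shmmax = half of the RAM in bytes, shmall = RAM / page size in pages):
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
PAGE=$(getconf PAGE_SIZE)
echo "kernel.shmmax target: $((MEM_KB * 1024 / 2))"
echo "kernel.shmall target: $((MEM_KB * 1024 / PAGE))"
sysctl kernel.shmmax kernel.shmall kernel.sem    # display the values currently in effect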
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle; the release shown is the minimal release required. You can check whether these packages are available with one of the following commands:
rpm -q make-3.79.1            # check the exact release
or
rpm -qa | grep make           # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will specify the base architecture of a specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
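To check all of the required packages on both nodes in one pass, a small loop such as the following can be used. This is a sketch; the list is abbreviated and should be completed with the full set of packages given above:
for pkg in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel \
           ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst \
           libX11 libXau libxcb libXi make sysstat; do
    rpm -q $pkg --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" || echo "$pkg MISSING"
done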
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs                   /dev/shm                tmpfs   rw,exec         0 0
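If /dev/shm also needs to be sized explicitly (it must be large enough to hold the MEMORY_TARGET/SGA of the instances when Automatic Memory Management is used), a size option can be added to the same line. The 16g value below is only an illustrative assumption, not a figure from the original text:
tmpfs                   /dev/shm                tmpfs   rw,exec,size=16g        0 0
[root@oracle52 ~]# mount -o remount /dev/shm      # apply without rebooting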
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one is used for public access to the server, and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service; for more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward, we need to define the nodes and cluster information:

Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                          Node 1           Node 2
Server public name            oracle52         oracle53
Server public IP address      172.16.0.52      172.16.0.53
Server VIP name               oracle52vip      oracle53vip
Server VIP address            172.16.0.32      172.16.0.33
Server private name 1         oracle52priv0    oracle53priv0
Server private IP address 1   192.168.0.52     192.168.0.53
Server private name 2         oracle52priv1    oracle53priv1
Server private IP address 2   192.168.1.52     192.168.1.53
The current configuration should contain at least eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case eth2 was also initialized, in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34    oracle34
172.16.0.35    scan2
172.16.0.36    scan3
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
172.16.0.52    oracle52
172.16.0.53    oracle53
During the installation process IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
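For illustration, the private interconnect interface definition on node 1 could look like the following /etc/sysconfig/network-scripts/ifcfg-eth1 file. The addresses come from the table above; the remaining settings are assumptions to adapt to your environment:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.52
NETMASK=255.255.255.0
NM_CONTROLLED=no
IPV6INIT=no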
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409: The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409: The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check if the NTP server is reachable. The reach value (highlighted in red in the original document) needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to resynchronize your server manually. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be log as
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server, or to a GPS-synchronized server, as shown in the example below:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
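If your security policy allows it, the firewall can also be kept disabled across reboots; otherwise, open the required Oracle ports instead. A possible sketch:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off
[root@oracle52 ~]# chkconfig --list iptables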
Create the grid and oracle users and groups. The uid and gid have to be the same on all nodes of the cluster; use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already used:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
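Since the uid and gid must match on all nodes, a quick comparison can be done by running the same check on each node and comparing the output. A minimal sketch:
for u in oracle grid; do id $u; done        # run on every node of the cluster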
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case the SSH setup was done using the Oracle script, for both the grid and the oracle user. During the script execution the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the keys:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
    should send the date of node1
ssh nodename2 date
    should send the date of node2
ssh private_interconnect_nodename1 date
    should send the date of node1
ssh private_interconnect_nodename2 date
    should send the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
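To verify that the new limits are effective, open a fresh session for each user (the values are read by pam_limits at login) and display the soft and hard limits; for example:
[root@oracle52 ~]# su - grid -c 'ulimit -Sn -Su -Ss -Sl'    # soft limits: nofile, nproc, stack, memlock
[root@oracle52 ~]# su - grid -c 'ulimit -Hn -Hu -Hs -Hl'    # hard limits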
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                [100%]
   1:cvuqdisk               [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features, or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    # don't count normal SATA devices as multipaths
    device {
        vendor "ATA"
    }
    # don't count 3ware devices as multipaths
    device {
        vendor "3ware"
    }
    device {
        vendor "AMCC"
    }
    # nor highpoint devices
    device {
        vendor "HPT"
    }
    device {
        vendor "HP"
        product "Virtual_DVD-ROM"
    }
    wwid "*"
}
blacklist_exceptions {
    wwid "360002ac0000000000000001f00006e40"
}
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor                  "3PARdata"
        product                 "VV"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector           "round-robin 0"
        path_checker            tur
        hardware_handler        "0"
        failback                immediate
        rr_weight               uniform
        rr_min_io_rq            100
        no_path_retry           18
    }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part below that is shown in red in the original document:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
    multipath {
        wwid "360002ac0000000000000002100006e40"
        alias voting
    }
    multipath {
        wwid "360002ac0000000000000002200006e40"
        alias data01
    }
    multipath {
        wwid "360002ac0000000000000002300006e40"
        alias fra01
    }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names):
[rootoracle52 ~] multipath -F
[rootoracle52 ~] multipath ndashv1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat provides its own library; it is part of the supplementary channel. As of version 6, the Red Hat ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that will default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
The disable/enable option of the oracleasm script controls whether the driver starts automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order giving priority to the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
Check that oracleasm will be started automatically after the next boot
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more depending on the redundancy level used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace \
--device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the IO scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
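An alternative to the global grub elevator setting is a udev rule that applies the deadline scheduler only to the multipath devices. The rule below is a minimal sketch and not part of the original procedure; the file name is arbitrary and the DM_NAME patterns assume the multipath aliases (data01, fra01, voting) defined earlier:
[root@oracle52 ~]# cat /etc/udev/rules.d/60-oracle-scheduler.rules
# Set the deadline scheduler for the ASM multipath devices, whatever dm-N number they get
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data01|fra01|voting", ATTR{queue/scheduler}="deadline"
Binding the rule to DM_NAME rather than to dm-4/dm-5/dm-6 protects against the dm-N numbering changing across reboots.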
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf %{version} redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal with a display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource. The crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output.
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can easily be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
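For reference, a disk group can also be created without ASMCA, directly from SQL*Plus connected to the ASM instance. The statements below are only a sketch: the disk group name DATA2 and the ASMLib label DATA02 are hypothetical and do not exist in this configuration:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
A disk group created this way is mounted only on the local ASM instance; on the other nodes it still has to be mounted, for example:
[grid@oracle53 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA2 MOUNT;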
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is only eligible for clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. Terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
To install the RHEL 6.4 supplement for the HP SPP, you must first untar the file before running hpsum again:
[rootoracle52 kits] mkdir supspprhel6
[rootoracle52 kits] mv supspprhel64entargz supspprhel6
[rootoracle52 kits] cd supspprhel6
[rootoracle52 kits] tar xvf supspprhel64entargz
[rootoracle52 kits] hpsum
Next follow the same procedure as with the regular SPP
A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.
Check for the newly presented shared LUNs
The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (like Fibre Channel or SAS), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.
Find what the host number is for the HBA
[root@oracle52 ~]# ls /sys/class/fc_host
host1  host2
1. Ask the HBA to issue a LIP signal to rescan the FC bus:
[root@oracle52 ~]# echo "1" > /sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo "1" > /sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on that HBA:
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN.
That's it. You can look at the "dmesg" log messages to see if it's working, and you can check /proc/scsi/scsi to see if the devices are there.
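When the server has several FC ports, the same sequence can be wrapped in a small loop so that every host adapter is rescanned. This is just a convenience sketch, not part of the original procedure:
[root@oracle52 ~]# for h in /sys/class/fc_host/host*; do echo "1" > $h/issue_lip; done
[root@oracle52 ~]# sleep 15
[root@oracle52 ~]# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done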
Alternatively, once the SPP is installed, you can use the hp_rescan utility. Look for it in /opt/hp.
[rootoracle52 hp_fibreutils] hp_rescan -h
NAME
hp_rescan
DESCRIPTION
Sends the rescan signal to all or selected Fibre Channel HBAsCNAs
OPTIONS
-a --all - Rescan all Fibre Channel HBAs
-h --help - Prints this help message
-i --instance - Rescan a particular instance ltSCSI host numbergt
-l --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters
Check the required kernel parameters by using the following commands
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result
Parameter                          Value
kernel.sem (semmsl)                250
kernel.sem (semmns)                32000
kernel.sem (semopm)                100
kernel.sem (semmni)                128
kernel.shmall                      physical RAM size / pagesize (**)
kernel.shmmax                      half of the RAM, or 4GB (*)
kernel.shmmni                      4096
fs.file-max                        6815744
fs.aio-max-nr                      1048576
net.ipv4.ip_local_port_range       9000 65500
net.core.rmem_default              262144
net.core.rmem_max                  4194304
net.core.wmem_default              262144
net.core.wmem_max                  1048576
(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
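The sketch below shows how the two values can be derived from the figures above. It follows the "half of the physical memory" rule used in the /etc/sysctl.conf example that follows; the resulting numbers depend on the amount of RAM of your own system:
[root@oracle52 tmp]# PAGE_SIZE=$(getconf PAGE_SIZE)
[root@oracle52 tmp]# MEM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
[root@oracle52 tmp]# echo "kernel.shmmax = $((MEM_BYTES / 2))"               # half of the RAM, in bytes
[root@oracle52 tmp]# echo "kernel.shmall = $((MEM_BYTES / 2 / PAGE_SIZE))"   # half of the RAM, in pages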
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p to load the updated parameters in the current session.
Check the necessary packages
The following packages are necessary before installing Oracle Grid infrastructure and Oracle RAC 12c
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # check the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both their 32-bit and 64-bit releases. The following command output will specify the base architecture of a specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[rootoracle52 tmp] yum list libaio-devel
Loaded plugins rhnplugin security
Available Packages
libaio-develi386 03106-5 rhel-x86_64-server-5
libaio-develx86_64 03106-5 rhel-x86_64-server-5
[rootoracle52 tmp] yum install libaio-develi386
Loaded plugins rhnplugin security
Setting up Install Process
Resolving Dependencies
--gt Running transaction check
---gt Package libaio-develi386 003106-5 set to be updated
--gt Finished Dependency Resolution
Dependencies Resolved
============================================================================
Package Arch Version Repository Size
============================================================================
Installing
libaio-devel i386 03106-5 rhel-x86_64-server-5 12 k
Transaction Summary
============================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Total download size 12 k
Is this ok [yN] y
Downloading Packages
libaio-devel-03106-5i386rpm | 12 kB 0000
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing libaio-devel 11
Installed
libaio-develi386 003106-5
Complete
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs   /dev/shm    tmpfs   rw,exec        0 0
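To apply a modified tmpfs line without rebooting, the file system can simply be remounted; a short sketch (not part of the original procedure):
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep shm
tmpfs on /dev/shm type tmpfs (rw)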
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server, and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks each nodes private interface for interconnects must be on the same subnet and that subnet must be connected to every node of the cluster
For clusters using Redundant Interconnect Usage each private interface should be on a different subnet However each cluster member node must have an interface on each private interconnect subnet and these subnets must connect to every node of the cluster
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
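Once Grid Infrastructure is installed, the interface classification can be checked with the oifcfg utility from the grid home. The output below is only an illustrative sketch based on the subnets used in this paper, not a capture from the test systems:
[grid@oracle52 ~]$ oifcfg getif
eth0  172.16.0.0   global  public
eth1  192.168.0.0  global  cluster_interconnect
eth2  192.168.1.0  global  cluster_interconnect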
About the IP addressing requirement
This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data Value
Cluster name okc12c
SCAN address 1 172.16.0.34
SCAN address 2 172.16.0.35
SCAN address 3 172.16.0.36
Data Node 1 Node 2
Server public name oracle52 oracle53
Server public IP address 172.16.0.52 172.16.0.53
Server VIP name oracle52vip oracle53vip
Server VIP address 172.16.0.32 172.16.0.33
Server private name 1 oracle52priv0 oracle53priv0
Server private IP address 1 192.168.0.52 192.168.0.53
Server private name 2 oracle52priv1 oracle53priv1
Server private IP address 2 192.168.1.52 192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34 oracle34
172.16.0.35 scan2
172.16.0.36 scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32 oracle52vip
172.16.0.33 oracle53vip
172.16.0.52 oracle52
172.16.0.53 oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
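Before going further, it is also worth checking from each node that the SCAN name registered in the corporate DNS resolves to the three addresses listed above. The check below is an addition to the original procedure; replace <scan_name> with the SCAN name defined by your DNS administrator:
[root@oracle52 ~]# nslookup <scan_name>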
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
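Once the cluster stack is up, the mode currently used by the Cluster Time Synchronization Service can be confirmed with the command below; the output indicates whether ctssd is running in observer or active mode:
[grid@oracle52 ~]$ crsctl check ctss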
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52    # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd: [ OK ]
Starting ntpd: [ OK ]
Check if the NTP server is reachable. The reach value needs to be higher than 0:
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntphpnet
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. In that case ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[rootoracle53 kits] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 1721625510 3 u 6 64 1 128719 5275 0000
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntphpnet 172165810 3 u 3 64 1 108900 12492 0000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172165810 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 1721625510 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server, or to a GPS server, as shown in the example below:
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
Check the SELinux setting
In some circumstances the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
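If your security policy allows it, the firewall can also be kept disabled across reboots (otherwise, open the required Oracle ports instead); for example:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables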
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check that the uids and gids are not already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username_nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username_nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
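A new login session picks up these values; a quick way to verify them for the grid user is shown below (the same check can be run for the oracle user):
[root@oracle52 ~]# su - grid -c "ulimit -u -n -s"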
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the rpm directory on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, which configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
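After the file has been modified, the running daemon must re-read its configuration. A simple way to do this, sketched below, is to reload the service (the reconfigure command of the interactive multipathd -k shell achieves the same result):
[root@oracle52 ~]# service multipathd reload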
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes by adding the backup entry shown as the last title block below:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#         all kernel and initrd paths are relative to /boot/, eg.
#         root (hd0,0)
#         kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#         initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipath subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
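For completeness, if the udev approach were chosen instead of ASMLib, the new rules would have to be re-read and re-applied, typically as follows (not needed in our ASMLib-based setup):
[root@dbkon01 rules.d]# udevadm control --reload-rules
[root@dbkon01 rules.d]# udevadm trigger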
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to combine the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
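The new scan settings are only taken into account at the next scan; restarting the oracleasm service and listing the disks again is a quick way to validate the change on each node:
[root@oracle52 ~]# service oracleasm restart
[root@oracle52 ~]# oracleasm listdisks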
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and Voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path Usage Size
/u01/app/oracle $ORACLE_BASE for the oracle db owner 5.8 GB
/u01/app/oracle/12c $ORACLE_HOME for the oracle db user –
/u01/app/base $ORACLE_BASE for the grid owner 3.5 GB
/u01/app/grid/12c $ORACLE_HOME for the grid user –
/dev/oracleasm/disks/FRA01 Flash recovery area (ASM) 20 GB
/dev/oracleasm/disks/VOTING OCR (volume) 2 GB
/dev/oracleasm/disks/DATA01 Database (volume) 20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
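Because the dm-N numbering is not guaranteed to stay the same across reboots, an alternative sketch is to set the scheduler per ASM device at boot time from /etc/rc.local; the device names below are the ones observed on this system and must be adapted to any other setup:
# /etc/rc.local addition - force the Deadline scheduler on the ASM multipath devices
for dev in dm-4 dm-5 dm-6; do
    echo deadline > /sys/block/${dev}/queue/scheduler
done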
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation, and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD: ALL
oracle ALL=(ALL) NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf "%{version}" redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verification utility (which is located in the base directory of the media file) and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a DISPLAY.
In a basic single installation environment, there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example, the goal is to install a standard cluster, not a flex cluster.
Select "Advanced Installation".
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capacity using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for a shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
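The SCAN and SCAN listener setup can also be displayed with srvctl, which is a convenient cross-check of the information above:
[grid@oracle52 ~]$ srvctl config scan
[grid@oracle52 ~]$ srvctl status scan_listener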
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here; see the sketch below).
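For the record, the silent mode relies on the dbca command line; the invocation below is only an indicative sketch (template, database name and passwords are placeholders to adapt), not the procedure followed in this paper.
[oracle@oracle52 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName HP12C -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword <sys_password> -systemPassword <system_password>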
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows creating server profiles and running RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature of 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef|grep LISTENER|grep -v grep
grid 10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef|grep LISTENER|grep -v grep
grid 22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME               TYPE        VALUE
------------------ ----------- ------------------------------
local_listener     string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME               TYPE        VALUE
------------------ ----------- ------------------------------
remote_listener    string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME               TYPE        VALUE
------------------ ----------- ------------------------------
local_listener     string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME               TYPE        VALUE
------------------ ----------- ------------------------------
remote_listener    string      oracle34:1521
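Should these parameters ever need to be adjusted (for example after a SCAN change), they can be modified with ALTER SYSTEM; the statements below are only a sketch based on the SCAN name and addresses of this setup.
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))' SCOPE=BOTH SID='HP12C_1';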
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))   # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
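The same information can be cross-checked with the SCAN listener resources themselves; for example (output not shown here):
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener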
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
   Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
   Version                  : 4
   Total space (kbytes)     : 409568
   Used space (kbytes)      : 1492
   Available space (kbytes) : 408076
   ID                       : 573555284
   Device/File Name         : +DATA
   Device/File integrity check succeeded
   Device/File not configured
   Device/File not configured
   Device/File not configured
   Device/File not configured
   Cluster registry integrity check succeeded
   Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
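After enabling or disabling the autostart, the overall state of the stack can be verified with crsctl; a quick check from any node could look like this (sketch, output omitted):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stat res -t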
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Set the kernel parameters
Check the required kernel parameters by using the following commands:
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmall
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:

Parameter                        Value
kernel.sem (semmsl)              250
kernel.sem (semmns)              32000
kernel.sem (semopm)              100
kernel.sem (semmni)              128
kernel.shmall                    physical RAM size / pagesize (**)
kernel.shmmax                    half of the RAM, or 4 GB (*)
kernel.shmmni                    4096
fs.file-max                      6815744
fs.aio-max-nr                    1048576
net.ipv4.ip_local_port_range     9000 65500
net.core.rmem_default            262144
net.core.rmem_max                4194304
net.core.wmem_default            262144
net.core.wmem_max                1048576

(*) max is 4294967296
(**) 8239044 in our case
[root@oracle52 tmp]# getconf PAGE_SIZE
4096
[root@oracle52 tmp]# grep MemTotal /proc/meminfo
MemTotal:       32956176 kB
In order to make these parameters persistent, update the /etc/sysctl.conf file:
[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # half the size of the physical memory, in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # half the size of the physical memory, in pages
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Run sysctl -p to load the updated parameters in the current session.
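As a convenience, the two memory-related values can be derived from the host memory and page size; the snippet below is only a helper sketch applying the rules of the table above (half of the RAM for shmmax, RAM divided by the page size for shmall).
MEM_BYTES=$(awk '/MemTotal/ {print $2*1024}' /proc/meminfo)   # physical memory in bytes
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "kernel.shmmax = $((MEM_BYTES/2))"
echo "kernel.shmall = $((MEM_BYTES/PAGE_SIZE))"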
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1        # checks the exact release
or
rpm -qa | grep make       # syntax comparison in the rpm database
Due to the specific 64-bit architecture of the x86_64, some packages are necessary in both the 32-bit and 64-bit releases. The following command output will specify the base architecture of each package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
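To avoid checking the packages one by one, a small loop can be used; this is only a convenience sketch, and the list must be kept in line with the table above.
for p in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh \
         libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst libX11 \
         libXau libxcb libXi make sysstat unixODBC unixODBC-devel
do
  rpm -q $p --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" || echo "$p is missing"
done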
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386      0.3.106-5      rhel-x86_64-server-5
libaio-devel.x86_64    0.3.106-5      rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================
 Package           Arch     Version       Repository                Size
============================================================================
Installing:
 libaio-devel      i386     0.3.106-5     rhel-x86_64-server-5      12 k
Transaction Summary
============================================================================
Install       1 Package(s)
Upgrade       0 Package(s)
Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm                        |  12 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libaio-devel                                        1/1
Installed:
  libaio-devel.i386 0:0.3.106-5
Complete!
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab |grep tmpfs
tmpfs    /dev/shm    tmpfs    defaults    0 0
[root@oracle52 ~]# mount|grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs    /dev/shm    tmpfs    rw,exec    0 0
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation (see the DNS sketch after the table below).
Before moving forward, we need to define the nodes and cluster information:

Data              Value
Cluster name      okc12c
SCAN address 1    172.16.0.34
SCAN address 2    172.16.0.35
SCAN address 3    172.16.0.36
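As an illustration only, the three SCAN addresses above would be registered in the corporate DNS as three A records under a single name, resolved in a round-robin fashion; the zone file extract below is a sketch for a hypothetical zone.
; example zone entries (hypothetical DNS zone)
oracle34    IN  A  172.16.0.34
oracle34    IN  A  172.16.0.35
oracle34    IN  A  172.16.0.36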
Data                           Node 1           Node 2
Server public name             oracle52         oracle53
Server public IP address       172.16.0.52      172.16.0.53
Server VIP name                oracle52vip      oracle53vip
Server VIP address             172.16.0.32      172.16.0.33
Server private name 1          oracle52priv0    oracle53priv0
Server private IP address 1    192.168.0.52     192.168.0.53
Server private name 2          oracle52priv1    oracle53priv1
Server private IP address 2    192.168.1.52     192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34     oracle34
172.16.0.35     scan2
172.16.0.36     scan3
192.168.0.52    oracle52priv0
192.168.0.53    oracle53priv0
192.168.1.52    oracle52priv1
192.168.1.53    oracle53priv1
172.16.0.32     oracle52vip
172.16.0.33     oracle53vip
172.16.0.52     oracle52
172.16.0.53     oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
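For reference, the public and private interfaces use static addresses defined in their ifcfg files; the example below is a sketch of what /etc/sysconfig/network-scripts/ifcfg-eth1 could look like on node 1 (the HWADDR and UUID lines are omitted).
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.52
NETMASK=255.255.255.0
IPV6INIT=no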
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:      [ OK ]
Starting ntpd:           [ OK ]
Check if the NTP server is reachable. The "reach" value (highlighted in red in the original output) needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.58.10" is common only to the following nodes "oracle52"
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.255.10" is common only to the following nodes "oracle53"
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: INFO: Error Message: PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: INFO: Cause: An attempt to query the NTP daemon using the 'ntpq' command failed on all nodes.
INFO: INFO: Action: Make sure that the NTP query command 'ntpq' is available on all nodes and make sure that the user running the CVU check has permissions to execute it.
Ignoring this error will generate a failure at the end of the installation process, as shown in figure 9 below.
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                    [ OK ]
iptables: Setting chains to policy ACCEPT: filter     [ OK ]
iptables: Unloading modules:                          [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
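To keep the firewall disabled across reboots (only if this is acceptable for your security policy), the service can also be removed from the boot sequence:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off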
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are used or not:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the keys:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
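The new limits can be checked after the next login of the grid and oracle users; for example:
[root@oracle52 ~]# su - grid -c "ulimit -n -u -s"
[root@oracle52 ~]# su - oracle -c "ulimit -n -u -s"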
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available in the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa |grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa |grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci|grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list |grep multi
multipathd     0:off  1:off  2:off  3:on  4:on  5:on  6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies. They are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built-in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F before, to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a sketch is shown below).
2. Creating a udev rule (see the example that follows, which is not relevant to our environment).
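For the first option, a minimal sketch of what could be appended to /etc/rc.local is shown below. The device aliases are the ones defined earlier in /etc/multipath.conf; the owner, group and modes are only illustrative and would have to match the ASM owner actually used (we do not use this method here, since ASMLib handles permissions for us):
# Illustrative /etc/rc.local additions (not used in this environment)
chown grid:asmadmin /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01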
For the udev option, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published a white paper some time ago describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm \
oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver file system needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is automatically started at boot time.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it: run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
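For instance, querydisk can be used to validate a label and map it back to its device; the output below is only illustrative of the format:
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid Disk on device [253,5]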
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm      0:off  1:off  2:on  3:on  4:on  5:on  6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME): Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is not used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation (see the example after this list).
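For instance, if /tmp is too small, the oracle user environment could point the installer to another location before launching it (the path below is just an example, not part of this setup):
export ORA_TMP=/u01/tmp
export ORA_TEMP=/u01/tmp
export TMPDIR=/u01/tmp
export TEMP=/u01/tmp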
In this example we created the following directories:

Path                           Usage                                   Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner    5.8GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user     –
/u01/app/base                  $ORACLE_BASE for the grid owner         3.5GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)               20GB
/dev/oracleasm/disks/VOTING    OCR (volume)                            2GB
/dev/oracleasm/disks/DATA01    Database (volume)                       20GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
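Alternatively, instead of the boot-wide elevator switch, the scheduler could be set only for the device-mapper devices with a udev rule; a minimal sketch (the file name and match pattern are examples, not taken from this setup):
# /etc/udev/rules.d/99-oracle-scheduler.rules (example)
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"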
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
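One simple way to do that, assuming the entries were added exactly as shown above, is to comment them out again with visudo once all the root scripts have been executed on every node:
[root@oracle52 ~]# visudo
# grid   ALL=(ALL)       NOPASSWD: ALL
# oracle ALL=(ALL)       NOPASSWD: ALL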
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf "%{version}" redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a Flex Cluster.
Select "Advanced Installation".
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setup was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option; we pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
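As an illustration, a client would typically reach the database through the SCAN with a tnsnames.ora entry similar to the one below (HP12C is the service name of the database created later in this paper; adapt the host and service name to your own environment):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )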
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
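For reference, the same kind of disk group can also be created without the GUI, from SQL*Plus connected to the ASM instance; a minimal sketch, where the disk label and group name are examples rather than part of this setup:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';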
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now we just want to install the binaries; the database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster of 4 CPUs (sockets) maximum.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space; as said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed.
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET                # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
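The runtime state of the SCAN VIP and SCAN listener can be checked the same way; the output below is an example of what to expect:
[grid@oracle52 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node oracle52
[grid@oracle52 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node oracle52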
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies, tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Check the necessary packages
The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:
• binutils-2.20.51.0.2-5.11.el6 (x86_64)
• compat-libcap1-1.10-1 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6 (x86_64)
• compat-libstdc++-33-3.2.3-69.el6.i686
• gcc-4.4.4-13.el6 (x86_64)
• gcc-c++-4.4.4-13.el6 (x86_64)
• glibc-2.12-1.7.el6 (i686)
• glibc-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6 (x86_64)
• glibc-devel-2.12-1.7.el6.i686
• ksh
• libgcc-4.4.4-13.el6 (i686)
• libgcc-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6 (x86_64)
• libstdc++-4.4.4-13.el6.i686
• libstdc++-devel-4.4.4-13.el6 (x86_64)
• libstdc++-devel-4.4.4-13.el6.i686
• libaio-0.3.107-10.el6 (x86_64)
• libaio-0.3.107-10.el6.i686
• libaio-devel-0.3.107-10.el6 (x86_64)
• libaio-devel-0.3.107-10.el6.i686
• libXext-1.1 (x86_64)
• libXext-1.1 (i686)
• libXtst-1.0.99.2 (x86_64)
• libXtst-1.0.99.2 (i686)
• libX11-1.3 (x86_64)
• libX11-1.3 (i686)
• libXau-1.0.5 (x86_64)
• libXau-1.0.5 (i686)
• libxcb-1.5 (x86_64)
• libxcb-1.5 (i686)
• libXi-1.3 (x86_64)
• libXi-1.3 (i686)
• make-3.81-19.el6
• sysstat-9.0.4-11.el6 (x86_64)
• unixODBC-2.2.14-11.el6 (64-bit) or later
• unixODBC-devel-2.2.14-11.el6 (64-bit) or later
The packages above are necessary in order to install Oracle. The package release listed is the minimal release required. You can check whether these packages are available or not with one of the following commands:
rpm -q make-3.79.1            # check the exact release
or
rpm -qa | grep make           # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit release and the 64-bit release. The following command output will specify the base architecture of the specific package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386                0.3.106-5              rhel-x86_64-server-5
libaio-devel.x86_64              0.3.106-5              rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================
 Package           Arch       Version        Repository               Size
============================================================================
Installing:
 libaio-devel      i386       0.3.106-5      rhel-x86_64-server-5     12 k
Transaction Summary
============================================================================
Install       1 Package(s)
Upgrade       0 Package(s)
Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm                       |  12 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libaio-devel                                      1/1
Installed:
  libaio-devel.i386 0:0.3.106-5
Complete!
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs   /dev/shm        tmpfs   rw,exec        0 0
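The new options can then be applied without a reboot by remounting the file system, for example:
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep /dev/shm
tmpfs on /dev/shm type tmpfs (rw)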
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement: this installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service (GNS). For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data                Value
Cluster name        okc12c
SCAN address 1      172.16.0.34
SCAN address 2      172.16.0.35
SCAN address 3      172.16.0.36
Data                            Node 1            Node 2
Server public name              oracle52          oracle53
Server public IP address        172.16.0.52       172.16.0.53
Server VIP name                 oracle52vip       oracle53vip
Server VIP address              172.16.0.32       172.16.0.33
Server private name 1           oracle52priv0     oracle53priv0
Server private IP address 1     192.168.0.52      192.168.0.53
Server private name 2           oracle52priv1     oracle53priv1
Server private IP address 2     192.168.1.52      192.168.1.53
The current configuration should contain at least eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the redundant private interconnect network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34  oracle34
172.16.0.35  scan2
172.16.0.36  scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32  oracle52vip
172.16.0.33  oracle53vip
172.16.0.52  oracle52
172.16.0.53  oracle53
During the installation process, IPv6 can be unselected: IPv6 is not supported for the private interconnect traffic.
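Before moving on to time synchronization, it can be worth verifying that every name defined above resolves identically from both nodes. The loop below is only a sketch using the host names from the tables above; the SCAN name placeholder has to be replaced with the name registered in your corporate DNS.
# Run on each node
for name in oracle52 oracle53 oracle52vip oracle53vip \
            oracle52priv0 oracle53priv0 oracle52priv1 oracle53priv1
do
  getent hosts $name || echo "$name is NOT resolved"
done
# The SCAN name must be resolved by the DNS (not /etc/hosts), ideally to 3 addresses
nslookup <scan-name>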
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes During installation the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware The time zone default is used for databases Oracle ASM and any other managed processes
Two options are available for time synchronization
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services If you use NTP then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode If you do not have NTP daemons then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server In this case Oracle will log warning messages into the CRS log as shown below These messages can be ignored
[ctssd(15076)]CRS-2409: The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409: The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52      # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check that the NTP server is reachable: the reach value needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
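After editing /etc/sysconfig/ntpd, restart the daemon and check that it picked up the slewing option. This is only a quick sanity check, assuming the standard RHEL 6 init script which reads OPTIONS from /etc/sysconfig/ntpd:
[root@oracle52 ~]# service ntpd restart
[root@oracle52 ~]# ps -ef | grep [n]tpd
# the ntpd command line reported by ps should now end with "... -g -x"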
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. ntpq -p will then report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be logged as:
INFO INFO Error Message: PRVF-5408: NTP Time Server 172.16.58.10 is common
only to the following nodes: oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error Message: PRVF-5408: NTP Time Server 172.16.255.10 is common
only to the following nodes: oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS-based server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
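If the firewall really has to stay out of the way (a lab context only; in production, open the required ports instead), the change can be made persistent across reboots. A possible sketch:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off
[root@oracle52 ~]# chkconfig --list iptables
# all runlevels should now report "off"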
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall)
groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall)
groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
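Because the uid and gid values must match on every node, a quick comparison between the two nodes avoids surprises later. A minimal check, assuming the host names used in this paper (run it once root SSH access to the other node is available, or simply compare the outputs manually):
[root@oracle52 ~]# for u in oracle grid; do id $u; ssh oracle53 id $u; done
# the uid, gid, and group lists reported for each user should be identical on both nodes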
Configure the secure shell service
To install Oracle software Secure Shell (SSH) connectivity must be set up between all cluster member nodes Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on and copy
files to the other cluster nodes You must configure SSH so that these commands do not prompt for a password Oracle Enterprise Manager also uses SSH
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run you must remove stty commands from the profiles of any Oracle software installation
owners and remove other security measures that are triggered during a login and that generate messages to the terminal These messages mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer If they are not disabled then SSH must be configured manually before an installation can be run
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user During the script execution the user password needs to be provided 4 times We also included a basic connection check in the example below
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1 On each node check if ssh is already active
ssh nodename1 date
ssh nodename2 date
2 Generate key
ssh-keygen -b 1024 -t dsa
Accept default value without passphrase
3 Export public key to the remote node
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software you must increase the following shell limits for the oracle and grid users
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
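Once /etc/security/limits.conf is updated, the new limits only apply to new sessions (pam_limits is invoked by default through the PAM stack on RHEL 6). A quick way to verify the soft values for the two owners is shown below; the figures reported should match the ones configured above.
[root@oracle52 ~]# su - grid -c 'ulimit -u -n -s'
[root@oracle52 ~]# su - oracle -c 'ulimit -u -n -s'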
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
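A quick check that the package landed on both nodes (the version string is the one shipped with this 12c media):
[root@oracle52 rpm]# rpm -q cvuqdisk
[root@oracle52 rpm]# ssh oracle53 rpm -q cvuqdisk
# both commands should return the cvuqdisk-1.0.9-1 package name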
Storage connectivity driver configuration
Since Red Hat 5.3, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 0
                gid 0
        }
}
multipathd>
In order to customize DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
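After editing /etc/multipath.conf, the running daemon has to re-read the file before the new 3PAR settings are effective. One possible way to do it without a reboot, and to check that the stanza was taken into account, is sketched below:
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipathd -k"show config" | grep -A 5 3PARdata
# the 3PARdata/VV device entry should now appear in the live configuration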
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part shown in red in the original document (the last "bkp" title entry, which boots the renamed initramfs):
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and WWID attributes of the multipath device, in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[root@oracle52 ~]# multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block device to a raw device, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below The file called ldquo99-oraclerulesrdquo is a copy of etcudevrulesd60-rawrules which has been updated with our own data
[rootdbkon01 rulesd] pwd
etcudevrulesd
[rootdbkon01 rulesd] more 99-oraclerules
This file and interface are deprecated
Applications needing raw device access should open regular
block devices with O_DIRECT
Enter raw device bindings here
An example would be
ACTION==add KERNEL==sda RUN+=binraw devrawraw1 N
to bind devrawraw1 to devsda or
ACTION==add ENVMAJOR==8 ENVMINOR==1 RUN+=binraw
devrawraw2 M m
to bind devrawraw2 to the device with major 8 minor 1
Oracle Configuration Registry
KERNEL== mappervoting OWNER=root GROUP=oinstall MODE=640
Voting Disks
KERNEL==mapperdata01 OWNER=oracle GROUP=dba MODE=660
KERNEL==mapperfra01 OWNER=oracle GROUP=dba MODE=660
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that will own the ASM driver access point by default. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the package starts automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk, once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup the other nodes need to be notified about it Run the createdisk command on one node and then run scandisks on every other node
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk querydisk listdisks etc
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
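With the scan order in place, it is worth confirming that ASMLib actually binds each ASM disk to the multipath device rather than to one of the underlying sd paths. A possible check, using the querydisk option of the oracleasm utility:
[root@oracle52 ~]# oracleasm querydisk -p DATA01
# the device path reported should include /dev/mapper/data01 (the multipath device),
# confirming that the ORACLEASM_SCANORDER / ORACLEASM_SCANEXCLUDE settings are effective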
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if normal or high redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                              Usage                                    Size
/u01/app/oracle                   $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c               $ORACLE_HOME for the oracle db user      –
/u01/app/base                     $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c                 $ORACLE_HOME for the grid user           –
/dev/oracleasm/disks/FRA01        Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING       OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01       Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
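An alternative to the global elevator=deadline kernel parameter is to restrict the change to the device-mapper block devices with a udev rule, so the rest of the system keeps its default scheduler. This is only a sketch (the rule file name is arbitrary); the grub approach above is what was used in this setup:
[root@oracle52 ~]# vi /etc/udev/rules.d/99-asm-scheduler.rules
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"
[root@oracle52 ~]# udevadm control --reload-rules
[root@oracle52 ~]# udevadm trigger --subsystem-match=block --action=change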
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled grid and oracle users can act as root by prefixing each and every command with a sudo For instance
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously enabling sudo for grid and oracle users raises security issues It is recommended to turn sudo off right after the complete binary installation
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user, as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document, and install the package.
Then run the cluster verify utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a DISPLAY.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter cluster name and SCAN name Remember the SCAN name needs to be resolved by the DNS For high availability purposes Oracle recommends using 3 IP addresses for the SCAN service The service will also work if only one is used
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can also be used with the "-t" extension for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
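Besides cluvfy, the SCAN and SCAN listener configuration can also be displayed at runtime with srvctl from the Grid Infrastructure home, for instance:
[grid@oracle52 ~]$ srvctl config scan
[grid@oracle52 ~]$ srvctl status scan_listener
# the first command lists the SCAN VIPs known to the cluster, the second one
# reports on which node each SCAN listener is currently running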
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed simply, by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
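The same information can also be queried from the ASM instance itself; a minimal sketch using SQL*Plus as the grid user (+ASM1 being the instance name set above):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;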
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in the .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
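A quick way to confirm the variables are consistent on every node is a small loop over ssh; this is only a sketch, assuming the passwordless ssh configured earlier (adapt the host names to your cluster):
[oracle@oracle52 ~]$ for h in oracle52 oracle53; do ssh $h '. ~/.bash_profile; echo $(hostname): $ORACLE_BASE $ORACLE_HOME'; done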
Installation
Log in as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
At this stage, we can either create a new database using a template or customize the new database.
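The silent mode is not covered in this paper, but for reference a minimal sketch of a silent, template-based creation could look like the following (database name, passwords and disk group names are illustrative only; check dbca -help for the exact options of your release):
[oracle@oracle52 ~]$ dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName HP12C -sysPassword xxx -systemPassword xxx -nodelist oracle52,oracle53 -storageType ASM -diskGroupName DATA -recoveryGroupName FRA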
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script. We can also configure the EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent

[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
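To see how each instance registers its handlers with the local listener, a complementary check with the standard lsnrctl syntax is:
[grid@oracle52 ~]$ lsnrctl services LISTENER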
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
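A few additional srvctl checks can be run at this point; a short sketch (standard srvctl syntax, HP12C being the database created above):
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan
[oracle@oracle52 ~]$ srvctl status database -d HP12C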
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          4
Total space (kbytes)     :     409568
Used space (kbytes)      :       1492
Available space (kbytes) :     408076
ID                       :  573555284
Device/File Name         :      +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
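Other frequently used one-liners at this stage, sketched here with the standard clusterware syntax:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all     # CRS, CSS and EVM state on every node
[grid@oracle52 ~]$ olsnodes -n -i -s                              # node list with VIPs and status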
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output will specify the base architecture of each package:
rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.
[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386      0.3.106-5      rhel-x86_64-server-5
libaio-devel.x86_64    0.3.106-5      rhel-x86_64-server-5
[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved
============================================================================
 Package            Arch     Version       Repository               Size
============================================================================
Installing:
 libaio-devel       i386     0.3.106-5     rhel-x86_64-server-5     12 k

Transaction Summary
============================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm                       |  12 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libaio-devel                                      1/1
Installed:
  libaio-devel.i386 0:0.3.106-5
Complete!
Checking shared memory file system mount
On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• With rw and exec permissions set on it
• Without noexec or nosuid set on it
Use the following procedure to check the shared memory file system:
1. Check current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab |grep tmpfs
tmpfs                   /dev/shm                tmpfs   defaults        0 0
[root@oracle52 ~]# mount|grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs                   /dev/shm                tmpfs   rw,exec         0 0
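The new options can be applied without a reboot; a minimal sketch (assuming the fstab entry above is already in place):
[root@oracle52 ~]# mount -o remount,rw,exec /dev/shm
[root@oracle52 ~]# df -h /dev/shm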
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.
Private interconnect redundant network requirements
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
About the IP addressing requirement
This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.
Before moving forward we need to define the nodes and cluster information
Data              Value
Cluster name      okc12c
SCAN address 1    172.16.0.34
SCAN address 2    172.16.0.35
SCAN address 3    172.16.0.36
Data                           Node 1           Node 2
Server public name             oracle52         oracle53
Server public IP address       172.16.0.52      172.16.0.53
Server VIP name                oracle52vip      oracle53vip
Server VIP address             172.16.0.32      172.16.0.33
Server private name 1          oracle52priv0    oracle53priv0
Server private IP address 1    192.168.0.52     192.168.0.53
Server private name 2          oracle52priv1    oracle53priv1
Server private IP address 2    192.168.1.52     192.168.1.53
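Before going further, the SCAN resolution can be tested against the corporate DNS; a short sketch (oracle34 being the SCAN name used in this setup, and assuming the three addresses above are registered under it):
[root@oracle52 ~]# host oracle34
Ideally the three SCAN addresses (172.16.0.34, 172.16.0.35 and 172.16.0.36) are returned in round-robin order.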
The current configuration should contain at least the following: eth0 and eth1 as public and private interfaces respectively. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• interconnect names for system 1 and system 2
• VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34   oracle34
172.16.0.35   scan2
172.16.0.36   scan3
192.168.0.52  oracle52priv0
192.168.0.53  oracle53priv0
192.168.1.52  oracle52priv1
192.168.1.53  oracle53priv1
172.16.0.32   oracle52vip
172.16.0.33   oracle53vip
172.16.0.52   oracle52
172.16.0.53   oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the
mean cluster time. No action has been taken as the Cluster Time Synchronization
Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52          # ntp server address
Then restart the NTP service
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd [ OK ]
Starting ntpd [ OK ]
Check if the NTP server is reachable. The value shown in red (the reach column) needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in the ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will provide the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.58.10" is common
only to the following nodes: oracle52
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a
successful Clock Synchronization check. If there are none, reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server.
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.255.10" is common
only to the following nodes: oracle53
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a
successful Clock Synchronization check. If there are none, reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server.
INFO: INFO: Error Message: PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: INFO: Cause: An attempt to query the NTP daemon using the ntpq command
failed on all nodes.
INFO: INFO: Action: Make sure that the NTP query command ntpq is available on
all nodes and make sure that the user running the CVU check has permissions to
execute it.
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9. runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64    1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config.
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
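A compact way to review both the running and the configured SELinux state in one pass is sestatus; a minimal sketch:
[root@oracle52 ~]# sestatus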
You might also have to disable the iptables in order to get access to the server using VNC
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to explicitly specify the uid and gid.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages carefully creating the users and passwords. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall)
groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall)
groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsa.oracle53
[grid@oracle52 .ssh]$ cat rsa.oracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes of the cluster manually:
1. On each node, check if ssh is already active:
   ssh nodename1 date
   ssh nodename2 date
2. Generate the key:
   ssh-keygen -b 1024 -t dsa
   Accept the default value, without passphrase.
3. Export the public key to the remote node:
   cd ~/.ssh
   scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
   cat id_dsa.pub >> authorized_keys
   cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
   should send the date of node1
ssh nodename2 date
   should send the date of node2
ssh private_interconnect_nodename1 date
   should send the date of node1
ssh private_interconnect_nodename2 date
   should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4; thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                [100%]
   1:cvuqdisk               [100%]
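Once the Grid Infrastructure software (or the runcluvfy.sh wrapper shipped on the installation media) is available, the shared storage discovery enabled by cvuqdisk can be tested with the cluvfy component used later in this paper; a sketch:
[grid@oracle52 ~]$ cluvfy comp ssa -n oracle52,oracle53 -verbose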
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa |grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa |grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci|grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list |grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes, which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
28
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathd>
In order to customize the DM Multipath features, or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
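After saving the file, the new 3PAR settings can be loaded without a reboot; a minimal sketch (the interactive multipathd syntax may vary slightly between versions):
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipathd -k"show config" | grep -A5 3PARdata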
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part shown below in red.
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file.
# NOTICE:  You have a /boot partition. This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid 360002ac0000000000000001f00006e40
                mode 0600
        }
        multipath {
                wwid 360002ac0000000000000002100006e40
                alias voting
        }
        multipath {
                wwid 360002ac0000000000000002200006e40
                alias data01
        }
        multipath {
                wwid 360002ac0000000000000002300006e40
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first, to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c, we do not need to bind the block device to the raw device, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib; this white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver file system needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that will default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The enable/disable options of the oracleasm script control whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be marked and made available. This is accomplished by creating an ASM disk, once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
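For instance, querydisk can be used to confirm the mapping between an ASM disk label and its block device; the output below is only indicative of the command format and will differ in your environment:
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid Disk on device [253, 5]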
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we define a scan order giving priority to the multipath devices, and we exclude the single-path devices from the scan.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
Check that oracleasm will be started automatically after the next boot
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME): Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is not used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                           Usage                                   Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user     –
/u01/app/base                  $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                       20 GB
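Before creating the directories, it can be worth confirming that the local file system actually provides the space listed above; a quick, illustrative check (the mount points are assumed from the table):
[root@oracle52 ~]# df -h /u01 /tmp
The reported available space should exceed the sizes in the table, plus the 1 GB required in /tmp.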
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48 active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
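As an alternative to the global elevator=deadline kernel parameter, the scheduler can be set per device with a udev rule so that only the device-mapper devices are affected. This is a sketch, not part of the original configuration; the rule file name and the match pattern are assumptions:
[root@oracle52 ~]# cat /etc/udev/rules.d/60-oracle-schedulers.rules
# Set the Deadline I/O scheduler for all device-mapper devices at boot
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"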
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root   ALL=(ALL)  ALL
grid   ALL=(ALL)  NOPASSWD: ALL
oracle ALL=(ALL)  NOPASSWD: ALL
Once this setting is enabled, grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...             [100%]
   1:redhat-release      [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document, and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported; we can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capacity using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora_
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
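Note that a disk group can also be created without ASMCA, from SQL*Plus connected to the ASM instance as sysasm. The statement below is a minimal sketch using a hypothetical ASMLib disk named DATA02; it was not run in this setup:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';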
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is only eligible for clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
At this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
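For reference, server pools can also be managed outside of DBCA with srvctl; the commands below are a minimal sketch (the pool name, size limits and importance are arbitrary values, not part of this installation):
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool pool_prod -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool -serverpool pool_prod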
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (sqlnet) allows connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener parameter points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME              TYPE    VALUE
----------------- ------- ------------------------------
local_listener    string  (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE    VALUE
----------------- ------- ------------------------------
remote_listener   string  oracle34:1521
In node 2
SQL> show parameter local_lis
NAME              TYPE    VALUE
----------------- ------- ------------------------------
local_listener    string  (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE    VALUE
----------------- ------- ------------------------------
remote_listener   string  oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON         # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET   # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON         # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET   # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
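Client connections should reference the SCAN rather than the individual node listeners. A tnsnames.ora entry along the lines below can be used on the client side; the alias is arbitrary and the service name matches the HP12C database created earlier (adjust to your environment):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )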
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations – well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
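To stop or start the complete cluster stack on one node (before a maintenance operation, for example), crsctl can also be used as root; these commands were not part of the installation flow and are shown here only as a reminder:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs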
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies, tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:
tmpfs /dev/shm tmpfs rw,exec 0 0
Preparing the network
Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic; the second one will be used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster.
For clusters using Redundant Interconnect Usage each private interface should be on a different subnet However each cluster member node must have an interface on each private interconnect subnet and these subnets must connect to every node of the cluster
Private interconnect redundant network requirements
With Redundant Interconnect Usage you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
When you define multiple interfaces Oracle Clusterware creates from one to four highly available IP (HAIP) addresses Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available load-balanced interface communication between nodes The installer enables Redundant Interconnect Usage to provide a high availability private network
By default Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication providing load-balancing across the set of interfaces you identify for the private network If a private interconnect interface fails or becomes non-communicative then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces
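Once the Grid Infrastructure is installed, the interfaces declared as public and private can be listed with oifcfg; the output below is only an illustration based on the subnets used in this paper:
[grid@oracle52 ~]$ oifcfg getif
eth0  172.16.0.0    global  public
eth1  192.168.0.0   global  cluster_interconnect
eth2  192.168.1.0   global  cluster_interconnect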
About the IP addressing requirement
This installation guide documents how to perform a typical installation; it doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux.
You must configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster (see the lookup example below). Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file; three addresses is a recommendation.
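A quick way to verify the SCAN resolution from any node is a DNS lookup; the output below is illustrative, assuming the SCAN name oracle34 and the addresses listed in the next table (ideally the name returns all three addresses):
[root@oracle52 ~]# nslookup oracle34
Name:    oracle34
Address: 172.16.0.34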
Before moving forward we need to define the nodes and cluster information
Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                           Node 1           Node 2
Server public name             oracle52         oracle53
Server public IP address       172.16.0.52      172.16.0.53
Server VIP name                oracle52vip      oracle53vip
Server VIP address             172.16.0.32      172.16.0.33
Server private name 1          oracle52priv0    oracle53priv0
Server private IP address 1    192.168.0.52     192.168.0.53
Server private name 2          oracle52priv1    oracle53priv1
Server private IP address 2    192.168.1.52     192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, the public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the redundant private interconnect network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34   oracle34
172.16.0.35   scan2
172.16.0.36   scan3
192.168.0.52  oracle52priv0
192.168.0.53  oracle53priv0
192.168.1.52  oracle52priv1
192.168.1.53  oracle53priv1
172.16.0.32   oracle52vip
172.16.0.33   oracle53vip
172.16.0.52   oracle52
172.16.0.53   oracle53
During the installation process, IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes During installation the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware The time zone default is used for databases Oracle ASM and any other managed processes
Two options are available for time synchronization
bull An operating system configured network time protocol (NTP)
bull Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services If you use NTP then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode If you do not have NTP daemons then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server In this case Oracle will log warning messages into the CRS log as shown below These messages can be ignored
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
2010-09-17 165528920
[ctssd(15076)]CRS-2409The clock on host oracle52 is not synchronous with the
mean cluster time No action has been taken as the Cluster Time Synchronization
Service is running in observer mode
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52        # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:   [ OK ]
Starting ntpd:        [ OK ]
Check that the NTP server is reachable. The reach value needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp   GPS.           1 u    5   64    1   133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you may have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
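To confirm on each node that the flag was actually picked up, a minimal check such as the following can be used (a sketch assuming the default RHEL 6 init script and file locations):
[root@oracle52 ~]# grep -- '-x' /etc/sysconfig/ntpd     # the OPTIONS line should show -x
[root@oracle52 ~]# service ntpd restart
[root@oracle52 ~]# ps -ef | grep ntpd | grep -v grep    # the running daemon should carry the -x flag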
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntphpnet        172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntphpnet        172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server, or to a GPS server as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2austinhp    GPS              1 u    5   64    1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
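Note that stopping the service only lasts until the next reboot. If disabling the firewall entirely is acceptable in your environment (an assumption, not a requirement of this paper), it can also be turned off at boot time on both nodes:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables    # all runlevels should now show off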
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E '504|505|506|507|508|509' /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E '502|501' /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall)
groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall)
groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
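As a quick sanity check, the ids can be compared across the nodes; the short loop below (a sketch using the node names of this setup) should return identical uid, gid and group lists on both nodes:
[root@oracle52 ~]# for node in oracle52 oracle53; do echo "== $node =="; ssh $node "id oracle; id grid"; done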
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster:
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively it is also possible to set the secure shell between all nodes in the cluster manually
1 On each node check if ssh is already active
ssh nodename1 date
ssh nodename2 date
2 Generate key
ssh-keygen -b 1024 -t dsa
Accept default value without passphrase
3 Export the public key to the remote node
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4 Create the trusted connection file
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_clunodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
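The new limits only apply to fresh login shells. A simple way to verify them on both nodes is a sketch like this (the reported nproc, nofile, stack and memlock values should match the entries above):
[root@oracle52 ~]# su - grid -c 'ulimit -u -n -s -l'
[root@oracle52 ~]# su - oracle -c 'ulimit -u -n -s -l'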
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1 Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                [100%]
   1:cvuqdisk               [100%]
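The package must end up installed on every node of the cluster; a quick check from the first node (a small helper loop using the node names of this setup):
[root@oracle52 rpm]# for node in oracle52 oracle53; do ssh $node "rpm -q cvuqdisk"; done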
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off  1:off  2:off  3:on  4:on  5:on  6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies. They are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
    device {
        vendor "DGC"
        product "LUNZ"
    }
    device {
        vendor "IBM"
        product "S/390.*"
    }
    # don't count normal SATA devices as multipaths
    device {
        vendor "ATA"
    }
    # don't count 3ware devices as multipaths
    device {
        vendor "3ware"
    }
    device {
        vendor "AMCC"
    }
    # nor highpoint devices
    device {
        vendor "HPT"
    }
    device {
        vendor "HP"
        product "Virtual_DVD-ROM"
    }
    wwid "*"
}
blacklist_exceptions {
    wwid "360002ac0000000000000001f00006e40"
}
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        path_checker tur
        hardware_handler "0"
        failback immediate
        rr_weight uniform
        rr_min_io_rq 100
        no_path_retry 18
    }
}
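After editing /etc/multipath.conf, the changes can be applied without a reboot; a minimal sketch using the multipathd interactive commands:
[root@oracle52 ~]# multipathd -k"reconfigure"
[root@oracle52 ~]# multipathd -k"show config" | grep -A 3 3PARdata    # the 3PAR device section should now be listed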
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes by adding a backup entry (the last title block below) that points to the renamed initramfs:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
    multipath {
        uid 0
        gid 0
        wwid "360002ac0000000000000001f00006e40"
        mode 0600
    }
    multipath {
        wwid "360002ac0000000000000002100006e40"
        alias voting
    }
    multipath {
        wwid "360002ac0000000000000002200006e40"
        alias data01
    }
    multipath {
        wwid "360002ac0000000000000002300006e40"
        alias fra01
    }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1 Updating the /etc/rc.local file
2 Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster:
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script will activate or deactivate the automatic startup of the package.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[rootoracle52 ASMLib] oracleasm createdisk VOTING devmappervoting
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk DATA01 devmapperdata01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk FRA01 devmapperfra01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[rootoracle52 ASMLib] ls -l devoracleasmdisks
brw-rw---- 1 grid asmadmin 253 5 Jul 25 1526 DATA01
brw-rw---- 1 grid asmadmin 253 4 Jul 25 1526 FRA01
brw-rw---- 1 grid asmadmin 253 6 Jul 25 1526 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
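To make sure the new scan order does not hide the ASM disks, the ASMLib service can be restarted and the disks rescanned on both nodes (a simple verification sketch):
[root@oracle52 ~]# service oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks    # DATA01, FRA01 and VOTING must still be listed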
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off  1:off  2:on  3:on  4:on  5:on  6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed (or more if external redundancy is not used). The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path                          Usage                                    Size
/u01/app/oracle               $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c           $ORACLE_HOME for the oracle db user      –
/u01/app/base                 $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c             $ORACLE_HOME for the grid user           –
/dev/oracleasm/disks/FRA01    Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING   OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01   Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder, delay, or merge requests for disk IO to achieve better throughput and lower latency. Linux has multiple disk IO schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline IO scheduler.
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk IO scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the IO scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
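After the next reboot, the setting can be verified on every ASM device with a small loop (note that the dm-N numbering may change across reboots, so re-check the mapping with multipath -l first):
[root@oracle52 ~]# for d in dm-4 dm-5 dm-6; do echo -n "$d: "; cat /sys/block/$d/queue/scheduler; done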
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the completion of the binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
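As an illustration, a minimal grid user .bash_profile fragment consistent with the paths used in this paper could look like the following (the ORACLE_SID value is an assumption: +ASM1 on node 1, +ASM2 on node 2):
# grid user environment (node 1) - minimal sketch
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH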
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf '%{version}' redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document, and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment, there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example, the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check if everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for the Ethernet ports. As mentioned earlier, we dedicated 2 interfaces for the private interconnect traffic. Oracle will enable HA capacity using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner accordingly with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
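The SCAN definition can also be cross-checked with srvctl, run as the grid user:
[grid@oracle52 ~]$ srvctl config scan             # SCAN name and IP addresses
[grid@oracle52 ~]$ srvctl config scan_listener    # SCAN listener and port
[grid@oracle52 ~]$ srvctl status scan_listener    # node currently hosting the SCAN listener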
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows creating server profiles and running RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances. Since 11gR2, the way it works slightly changed, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME                         TYPE        VALUE
---------------------------- ----------- ------------------------------
local_listener               string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                         TYPE        VALUE
---------------------------- ----------- ------------------------------
remote_listener              string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME             TYPE     VALUE
---------------- -------- ------------------------------
local_listener   string   (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME             TYPE     VALUE
---------------- -------- ------------------------------
remote_listener  string   oracle34:1521
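These parameters are normally maintained by the Oracle agent. If they ever had to be reset by hand, they could be set from SQL*Plus; a minimal sketch, reusing the values of this environment (the SCAN name oracle34 and the node 1 VIP and instance name), would be:
SQL> alter system set remote_listener='oracle34:1521' scope=both sid='*';
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))' scope=both sid='HP12C_1';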
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
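srvctl can also report on the SCAN listener resources themselves; for example:
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl status scan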
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
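Another example, not run in this setup, would be the post-installation check of the clusterware stack itself:
cluvfy stage -post crsinst -n oracle52,oracle53 -verbose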
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                     File Name       Disk group
--  -----    -----------------                     ---------       ----------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637      (ORCLVOTING01)  [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
   Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
   Version                  : 4
   Total space (kbytes)     : 409568
   Used space (kbytes)      : 1492
   Available space (kbytes) : 408076
   ID                       : 573555284
   Device/File Name         : +DATA
   Device/File integrity check succeeded
   Device/File not configured
   Device/File not configured
   Device/File not configured
   Device/File not configured
   Cluster registry integrity check succeeded
   Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
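To verify the overall state of the stack after such a change, the crsctl status commands can be used, for example:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stat res -t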
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together, HP and Oracle will help businesses succeed, whether with cloud solutions or by converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Data                          Node 1           Node 2
Server public name            oracle52         oracle53
Server public IP address      172.16.0.52      172.16.0.53
Server VIP name               oracle52vip      oracle53vip
Server VIP address            172.16.0.32      172.16.0.33
Server private name 1         oracle52priv0    oracle53priv0
Server private IP address 1   192.168.0.52     192.168.0.53
Server private name 2         oracle52priv1    oracle53priv1
Server private IP address 2   192.168.1.52     192.168.1.53
The current configuration should contain at least the following: eth0 and eth1 as, respectively, public and private interfaces. Please note the interface naming should be the same on all nodes of the cluster. In the current case eth2 was also initialized in order to set up the redundant private interconnect network.
[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
• the interconnect names for system 1 and system 2
• the VIP addresses for node 1 and node 2
[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
172.16.0.34  oracle34
172.16.0.35  scan2
172.16.0.36  scan3
192.168.0.52 oracle52priv0
192.168.0.53 oracle53priv0
192.168.1.52 oracle52priv1
192.168.1.53 oracle53priv1
172.16.0.32  oracle52vip
172.16.0.33  oracle53vip
172.16.0.52  oracle52
172.16.0.53  oracle53
During the installation process IPv6 can be unselected; IPv6 is not supported for the private interconnect traffic.
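Assuming the SCAN name is also declared in the DNS (as recommended for a 3-address SCAN), its resolution can be checked from each node, for example:
[root@oracle52 ~]# nslookup oracle34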
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52      # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
Check if the NTP server is reachable; the reach value needs to be higher than 0:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS             1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10   3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10    3 u    3   64     1  108.900   12.492   0.000
The error will be logged as:
INFO: INFO: Error Message:PRVF-5408 : NTP Time Server "172.16.58.10" is common only to the following nodes: oracle52
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: INFO: Error Message:PRVF-5408 : NTP Time Server "172.16.255.10" is common only to the following nodes: oracle53
INFO: INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO: INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO: INFO: Error Message:PRVF-5416 : Query of NTP daemon failed on all nodes
INFO: INFO: Cause: An attempt to query the NTP daemon using the ntpq command failed on all nodes.
INFO: INFO: Action: Make sure that the NTP query command ntpq is available on all nodes and make sure that the user running the CVU check has permissions to execute it.
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below:
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS             1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate some failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
For more about the iptables setting, look at the Red Hat documentation here.
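A stopped firewall comes back at the next boot. If the firewall is meant to stay disabled (rather than being configured as per the Red Hat documentation), the service can also be turned off persistently, for example:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables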
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
[root@oracle52 ~]# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected, mainly if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_clunodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
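The new limits only apply to new login sessions. A quick way to verify them is to open a session as each software owner and check the soft values, for example:
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ ulimit -u     # max user processes, expected 2047
[grid@oracle52 ~]$ ulimit -n     # open files, expected 1024
[grid@oracle52 ~]$ ulimit -s     # stack size (KB), expected 10240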
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                     [100%]
   1:cvuqdisk                    [100%]
Storage connectivity driver configuration
Since Red Hat 5.3, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa |grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa |grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci|grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list |grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 0
                gid 0
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now our multipath.conf file looks like this:
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
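After editing /etc/multipath.conf, the daemon has to re-read its configuration before the new device settings are applied; one way to do it, for example:
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipath -ll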
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. The part to add is the backup entry at the end (title "... bkp"), which points to the renamed initramfs image:
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot.
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script deactivates or activates the automatic startup of the package.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it: run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
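For example, querydisk can be used to map an ASM disk label back to its block device; a short illustration (the -p option prints the matching device paths):
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01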
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                           Usage                                    Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user      –
/u01/app/base                  $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user           –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
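A quick loop over the ASM device-mapper devices confirms the active scheduler, for instance after a reboot; a minimal sketch assuming the dm-4, dm-5 and dm-6 names seen above:
[root@oracle52 ~]# for d in dm-4 dm-5 dm-6; do echo -n "$d: "; cat /sys/block/$d/queue/scheduler; done
Each line should show [deadline] as the scheduler in use.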
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in bash_profile on all your cluster nodes
export ORACLE_BASE=u01appbase
export ORACLE_HOME=u01appgrid12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command "rpm -q --qf %{version} redhat-release-server" and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
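When the pre-check reports failed prerequisites that cluvfy knows how to correct, it can also generate a fixup script to be run as root on each node; the call below is a sketch using the same node list (the output file name is illustrative):
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -fixup -verbose >> /tmp/cluvfy_fixup.log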
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check if everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) Note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
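The ASMLib labels themselves can also be listed and queried with the oracleasm utility; the commands below are a sketch (the DATA01 label matches the disk used above, output is omitted and will reflect your own labels):
[root@oracle52 ~]# oracleasm listdisks
[root@oracle52 ~]# oracleasm querydisk -d DATA01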
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1   ONLINE  ONLINE   oracle52   Started,STABLE
ora.cluster_interconnect.haip
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crf
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssdmonitor
      1   ONLINE  ONLINE   oracle52   STABLE
ora.ctssd
      1   ONLINE  ONLINE   oracle52   OBSERVER,STABLE
ora.diskmon
      1   OFFLINE OFFLINE             STABLE
ora.drivers.acfs
      1   ONLINE  ONLINE   oracle52   STABLE
ora.evmd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gipcd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gpnpd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.mdnsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.storage
      1   ONLINE  ONLINE   oracle52   STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
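The SCAN and SCAN listener resources can also be checked with srvctl from the grid environment; the commands below are a short sketch (output is omitted and will vary with where the SCAN VIP is currently running):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl config scan_listener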
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING
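For completeness, a disk group can also be created from SQL*Plus on the ASM instance instead of ASMCA; the statement below is a minimal sketch, assuming an additional ASMLib disk labeled DATA02 that is not part of this installation:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';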
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oinstall group) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible on a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. They help optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME             TYPE        VALUE
---------------- ----------- ------------------------------
local_listener   string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME             TYPE        VALUE
---------------- ----------- ------------------------------
remote_listener  string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME             TYPE        VALUE
---------------- ----------- ------------------------------
local_listener   string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME             TYPE        VALUE
---------------- ----------- ------------------------------
remote_listener  string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
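A client (or application server) would then reference only the SCAN name in its tnsnames.ora; the entry below is a sketch for the HP12C database created in this document (the alias name is arbitrary):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )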
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name       Disk group
--  -----    -----------------                 ---------       ----------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637  (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
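The current autostart setting can be verified at any time with the config option listed earlier in the crsctl usage output (the message shown is the one returned when autostart is enabled):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.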
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.
Two options are available for time synchronization:
• An operating system configured network time protocol (NTP)
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case Oracle will log warning messages into the CRS log, as shown below. These messages can be ignored.
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value:
[root@oracle52 network-scripts]# vi /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
server 172.16.0.52      # ntp server address
Then restart the NTP service:
[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:        [ OK ]
Starting ntpd:             [ OK ]
Check if the NTP server is reachable. The value in red (reach) needs to be higher than 0.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:
[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
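After editing the file, restart the daemon and confirm the flag is in effect; a quick sketch (process details will differ on your system):
[root@oracle52 network-scripts]# service ntpd restart
[root@oracle52 network-scripts]# ps -ef | grep ntpd | grep -v grep
The ntpd command line reported by ps should now include the -x option.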
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the request to different machines. Then ntpq -p will provide the same time but with a different refid (see below); this shouldn't be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be log as
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.58.10" is common
only to the following nodes "oracle52"
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO: INFO: Error Message: PRVF-5408 : NTP Time Server "172.16.255.10" is common
only to the following nodes "oracle53"
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  GPS              1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting
In some circumstances the SELinux setting might generate some failure during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config:
[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing  - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled   - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable the iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules:                      [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
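To keep the firewall from coming back at the next boot, the service can also be removed from the runlevels (a sketch; only do this if your security policy allows it):
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off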
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first if the uids and gids are used or not:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct list of groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software Secure Shell (SSH) connectivity must be set up between all cluster member nodes Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on and copy
files to the other cluster nodes You must configure SSH so that these commands do not prompt for a password Oracle Enterprise Manager also uses SSH
You can configure SSH from the OUI interface during installation for the user account running the installation The automatic configuration creates passwordless SSH connectivity between all cluster member nodes Oracle recommends that you use the automatic procedure if possible Itrsquos also possible to use a script provided in the Grid Infrastructure distribution
To enable the script to run you must remove stty commands from the profiles of any Oracle software installation
owners and remove other security measures that are triggered during a login and that generate messages to the terminal These messages mail checks and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer If they are not disabled then SSH must be configured manually before an installation can be run
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user During the script execution the user password needs to be provided 4 times We also included a basic connection check in the example below
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local one, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually.
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the keys:
ssh-keygen -b 1024 -t dsa
Accept the default values, without a passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
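The new limits can be verified by opening a fresh session as each user; a minimal sketch for the grid user, checking the soft and hard nofile values configured above:
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ ulimit -n     # soft limit for open files (nofile)
1024
[grid@oracle52 ~]$ ulimit -Hn    # hard limit for open files (nofile)
65536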
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the currently installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 0
                gid 0
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor  "DGC"
                product "LUNZ"
        }
        device {
                vendor  "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor  "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor  "3ware"
        }
        device {
                vendor  "AMCC"
        }
        # nor highpoint devices
        device {
                vendor  "HPT"
        }
        device {
                vendor  "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid  0
                gid  0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "3PARdata"
                product                 "VV"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector           "round-robin 0"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                rr_min_io_rq            100
                no_path_retry           18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally we may update the boot menu for rollback purpose Add the part below that is in red
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
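After that reboot, one way to confirm that the new queue depth is active is to read the module parameter back from sysfs; a quick check (the value shown simply reflects the option set above):
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
16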
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a sketch follows the udev example below)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
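For completeness, option 1 (updating rc.local) would amount to re-applying the same owners and modes at every boot; a minimal sketch for illustration only, mirroring the values of the udev example above:
# Appended to /etc/rc.local on every node
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01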
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script will activate or deactivate the automatic startup of the package.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it: run the createdisk command on one node, and then run scandisks on every other node.
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
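For instance, querydisk can be used to check whether a device or label is known to ASMLib; a quick illustration (the exact wording of the output may differ slightly between ASMLib versions):
[root@oracle52 ASMLib]# oracleasm querydisk /dev/mapper/data01
Device "/dev/mapper/data01" is marked an ASM disk with the label "DATA01"
[root@oracle52 ASMLib]# oracleasm querydisk DATA01
Disk "DATA01" is a valid ASM disk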
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if a redundancy level other than external is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation (see the sketch below).
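If /tmp cannot be extended, the temporary location can be redirected before starting the installer; a minimal sketch, in which /u01/tmp is an arbitrary directory chosen here for illustration and the variable names are the ones mentioned above:
mkdir -p /u01/tmp
chmod 1777 /u01/tmp
export ORA_TMP=/u01/tmp
export ORA_TEMP=/u01/tmp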
In this example we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
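An alternative to the kernel parameter, not used in this setup, is a udev rule that applies the deadline elevator only to the device-mapper block devices; a minimal sketch, where the rule file name is our own choice:
[root@oracle52 ~]# cat /etc/udev/rules.d/99-dm-scheduler.rules
# Set the deadline elevator for device-mapper block devices only
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"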
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command "rpm -q --qf %{version} redhat-release-server" and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document, and install the package.
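Once the dummy package is installed, the value the verification utility looks for can be checked manually; a quick sanity check (the package name comes from the rpm file installed above):
[root@oracle53 kits]# rpm -q --qf "%{version}\n" redhat-release
6Server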
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a vncserver session or a terminal X and a DISPLAY.
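A typical way to obtain such a display is sketched below; the display number :1 is illustrative:
[grid@oracle52 ~]$ vncserver :1
[grid@oracle52 ~]$ export DISPLAY=oracle52:1
[grid@oracle52 ~]$ ./runInstaller
Connect to oracle52:1 with a VNC client and run the installer from within that session.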
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
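Before starting the installer, it is worth confirming that the SCAN name resolves from every node; a quick check using the SCAN name of this setup (output abbreviated, the address being the one configured in this paper; with three SCAN addresses, three entries would be returned):
[grid@oracle52 ~]$ nslookup oracle34
Name:    oracle34
Address: 172.16.0.34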
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1   ONLINE  ONLINE   oracle52   Started,STABLE
ora.cluster_interconnect.haip
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crf
      1   ONLINE  ONLINE   oracle52   STABLE
ora.crsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.cssdmonitor
      1   ONLINE  ONLINE   oracle52   STABLE
ora.ctssd
      1   ONLINE  ONLINE   oracle52   OBSERVER,STABLE
ora.diskmon
      1   OFFLINE OFFLINE             STABLE
ora.drivers.acfs
      1   ONLINE  ONLINE   oracle52   STABLE
ora.evmd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gipcd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.gpnpd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.mdnsd
      1   ONLINE  ONLINE   oracle52   STABLE
ora.storage
      1   ONLINE  ONLINE   oracle52   STABLE
The command below can be used with the "-t" extension for a shorter output.
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
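For illustration, a client tnsnames.ora entry pointing at the SCAN could look like the following; the alias is arbitrary, and the service name HP12C corresponds to the database created later in this paper:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )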
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and the management of ASM disk groups, so there is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox is only used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows creating server profiles and running RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26   00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON       # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON       # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations – well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
         Device/File Name         :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable or re-enable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
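The current autostart setting can be read back at any time with the config option listed in the crsctl usage above; for example:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.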
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:
[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue
Sometimes the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. Then ntpq -p will report the same time but with a different refid (see below); this should not be a problem. However, Oracle cluster verification compares the refids and raises an error if they are different.
[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64    1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64    1  108.900   12.492   0.000
The error will be logged as:
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.58.10 is common
only to the following nodes oracle52
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5408 NTP Time Server 172.16.255.10 is common
only to the following nodes oracle53
INFO INFO Cause One or more nodes in the cluster do not synchronize with the
NTP Time Server indicated
INFO INFO Action At least one common NTP Time Server is required for a
successful Clock Synchronization check If there are none reconfigure all of
the nodes in the cluster to synchronize with at least one common NTP Time
Server
INFO INFO Error MessagePRVF-5416 Query of NTP daemon failed on all nodes
INFO INFO Cause An attempt to query the NTP daemon using the ntpq command
failed on all nodes
INFO INFO Action Make sure that the NTP query command ntpq is available on
all nodes and make sure that user running the CVU check has permissions to
execute it
Ignoring this error will generate a failure at the end of the installation process as shown in figure 9 below
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best approach is to point to a single NTP server or to a GPS-synchronized source, as shown in the example below.
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp  .GPS.            1 u    5   64    1  133.520   15.473   0.000
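The simplest way to guarantee a common refid is to reference the same single time source in /etc/ntp.conf on every node and restart the daemon. A minimal sketch is shown below; the server name is only an example and must be replaced by a time source reachable from all cluster nodes.
[root@oracle52 ~]# vi /etc/ntp.conf
server ntp2.austin.hp.com iburst
[root@oracle52 ~]# service ntpd restart
[root@oracle52 ~]# ntpq -p
The refid column should now be identical on oracle52 and oracle53.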
Check the SELinux setting
In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set "disabled" as the value of the SELINUX parameter in /etc/selinux/config:
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. To change the SELinux value dynamically, use the following commands:
[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
For more about the iptables setting, look at the Red Hat documentation here.
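Note that stopping the service only lasts until the next reboot. If the firewall has to stay disabled (a choice to validate against your security policy), turn it off persistently and verify the runlevels:
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig --list iptables
iptables        0:off 1:off 2:off 3:off 4:off 5:off 6:off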
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall)
groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall)
groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For passphrase-free access in both directions, it is necessary to manually export the RSA public key from the remote node to the local node, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes of the cluster manually:
1. On each node, check if SSH is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
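A quick way to confirm that the new limits are effective (assuming pam_limits is enabled, which is the default on Red Hat Enterprise Linux 6) is to open a fresh session for the grid or oracle user and display the soft values, which should match the settings above:
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ ulimit -n
1024
[grid@oracle52 ~]$ ulimit -u
2047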
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the rpm directory on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall.
For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
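Remember that the rpm -ivh installation has to be repeated on oracle53 (the package was copied to /tmp there), and the presence of the package can then be verified on every node:
[root@oracle53 tmp]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@oracle53 tmp]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
[root@oracle52 rpm]# rpm -q cvuqdisk
[root@oracle53 tmp]# rpm -q cvuqdisk
Both nodes should report the cvuqdisk 1.0.9-1 package as installed.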
Storage connectivity driver configuration
With Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with the subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is no longer the case with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.14-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
28
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 0
                gid 0
        }
}
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include arrays which are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, using these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes by adding a backup entry (the last title block below) that boots the saved initramfs:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
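After the reboot, a quick sanity check is to read the values actually applied by the qla2xxx driver through sysfs; they should reflect the options set in fc-hba.conf:
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
16
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
10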
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file (a sketch is shown after the udev example below)
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
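For reference, the first option (rc.local) would simply reset ownership and permissions at every boot. A minimal sketch, reusing the alias names and the permissions chosen in the udev example above, could look like this (it is not used in our ASMLib-based setup):
# excerpt from /etc/rc.local (sketch only)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01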
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat provides its own library; it is part of the supplementary channel. As of version 6, the Red Hat ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
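After changing these parameters, a quick check is to restart oracleasm and confirm that the disks are still discovered through the multipath devices (to be done on each node):
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm listdisks
DATA01
FRA01
VOTING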
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path                           Usage                                   Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user     –
/u01/app/base                  $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command "rpm -q --qf %{version} redhat-release-server" and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document, and install the package.
Then run the cluster verification utility (located in the base directory of the media file) and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a DISPLAY.
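For example, a VNC-based session could be prepared as follows (assuming a VNC server package is installed; the display number and staging directory are illustrative):
[grid@oracle52 ~]$ vncserver :1
(connect to oracle52:1 with a VNC viewer, then, from that session)
[grid@oracle52 grid]$ ./runInstaller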
In a basic single installation environment, there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check it from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
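The SCAN definition can also be cross-checked with srvctl, run as the grid user; the first command reports the SCAN name (oracle34) and its VIP, the second one confirms that the SCAN listener is running:
[grid@oracle52 ~]$ srvctl config scan
[grid@oracle52 ~]$ srvctl status scan_listener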
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be easily managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
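For completeness, the same kind of disk group can also be created without the GUI. A sketch using SQL*Plus against the local ASM instance, equivalent to what ASMCA did above for the DATA group, would be:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> ALTER DISKGROUP DATA MOUNT;
The MOUNT command has to be repeated on the other node so that the disk group is available cluster-wide.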
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oinstall group) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
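For reference, a client would typically reach the database through the SCAN rather than through the node listeners. A sketch of a client-side tnsnames.ora entry, using the SCAN name oracle34 and the HP12C database created earlier, would be:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HP12C))
  )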
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (and owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File  /u01/app/grid/12c/network/admin/listener.ora
Listener Log File        /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File  /u01/app/grid/12c/network/admin/listener.ora
Listener Log File        /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
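The SCAN listener resources themselves can be checked the same way; the commands below are an optional verification sketch:
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl status scan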
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
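Other stages and components can be verified in the same way. For instance, a post-installation clusterware check and a best-practice health check could be run as sketched below; this is optional and not part of the original procedure:
[grid@oracle52 ~]$ cluvfy stage -post crsinst -n oracle52,oracle53 -verbose
[grid@oracle52 ~]$ cluvfy comp healthcheck -collect cluster -bestpractice -html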
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
         Device/File Name         : +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
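The current autostart setting can be verified at any time; a minimal check is sketched below:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.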
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096

%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Figure 9 runInstaller error related to the NTP misconfiguration
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. The best case is to point to a single NTP server or to a GPS server, as shown in the example below.
[rootoracle52 ~] ntpq -p
remote refid st t when poll reach delay offset jitter
============================================================================
ntp2austinhp GPS 1 u 5 64 1 133520 15473 0000
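As a hedged example, pointing both nodes to a single NTP server and enabling slewing (the -x option expected by Oracle Clusterware) would typically look like this on RHEL 6; the server name below is an assumption, not necessarily the one used in this setup:
[root@oracle52 ~]# grep ^server /etc/ntp.conf
server ntp2.austin.hp.com
[root@oracle52 ~]# grep ^OPTIONS /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
[root@oracle52 ~]# service ntpd restart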
Check the SELinux setting
In some circumstances the SELinux setting might generate failures during the cluster check or the root.sh execution.
In order to completely disable SELinux, set disabled as the value for the SELINUX parameter in /etc/selinux/config.
[root@oracle53 ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. In order to update the SELinux value dynamically, use the following commands:
[rootoracle52 oraInventory] getenforce
Enforcing
[rootoracle52 oraInventory] setenforce 0
[rootoracle52 oraInventory] getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC:
[root@oracle52 vnc]# service iptables stop
iptables: Flushing firewall rules:                          [  OK  ]
iptables: Setting chains to policy ACCEPT: filter           [  OK  ]
iptables: Unloading modules:                                [  OK  ]
For more about the iptables settings, look at the Red Hat documentation here.
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to explicitly specify the uid and gid.
Let's first check whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and passwords carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[rootoracle52 sshsetup] passwd oracle
[rootoracle52 sshsetup] passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It is also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster.
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For two-way passphrase-free access it is necessary to manually export the rsa file from the remote node to the local node, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53>>authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub>>authorized_keys
cat id_dsa_username.nodename1.pub>>authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
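A quick way to verify that the new limits are effective is to open a fresh login session for each user and display the soft and hard values. This check is only a sketch (pam_limits must be active for the login service used); the expected values reflect the entries above:
[root@oracle52 ~]# su - grid -c "ulimit -n"
1024
[root@oracle52 ~]# su - grid -c "ulimit -Hn"
65536
[root@oracle52 ~]# su - oracle -c "ulimit -Hu"
16384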
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4; thus you must install the cvuqdisk RPM manually. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out if you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                [100%]
   1:cvuqdisk               [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system use the lspci command
[rootoracle52 yumreposd] lspci|grep Fibre
05000 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05001 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running
[rootoracle52 ~] chkconfig --list |grep multi
multipathd 0off 1off 2off 3on 4on 5on 6off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "3PARdata"
                product                 "VV"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector           "round-robin 0"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                rr_min_io_rq            100
                no_path_retry           18
        }
}
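After editing /etc/multipath.conf, the new settings can be applied without a reboot. A typical sequence (a sketch, not mandatory at this exact point of the procedure) is:
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipath -r
[root@oracle52 ~]# multipath -ll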
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part below that is in red.
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
multipath
wwid 360002ac0000000000000002100006e40
alias voting
multipath
wwid 360002ac0000000000000002200006e40
alias data01
multipath
wwid 360002ac0000000000000002300006e40
alias fra01
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library. It is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
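For instance, querydisk can map an ASM disk label back to its underlying block device, which helps when cross-checking the multipath aliases. The commands below are a sketch; the exact output wording may vary with the oracleasm-support version:
[root@oracle52 ~]# oracleasm querydisk /dev/mapper/data01
[root@oracle52 ~]# oracleasm querydisk -p DATA01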
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a dba task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2GB.
• Temporary space: Oracle requires 1GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20GB
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next check that the IO scheduler status has been updated
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto elevator=deadline
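An alternative to the global elevator= kernel parameter is a udev rule that applies the deadline scheduler only to the device-mapper (dm-*) devices. The rule below is a sketch, and the rule file name is an assumption:
[root@oracle52 ~]# cat /etc/udev/rules.d/99-oracle-iosched.rules
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"
This avoids changing the scheduler of the local boot disk while still covering all multipath devices used by ASM.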
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[rootoracle52 sys] visudo
# Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD ALL
oracle ALL=(ALL) NOPASSWD ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf '%{version}' redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose>>/tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef|grep ora
ps -ef|grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource. The crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
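As an illustration of how clients use the SCAN, a tnsnames.ora entry pointing to the SCAN name could look like the sketch below. The service name HP12C matches the database created later in this document; adjust names and port to your environment:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )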
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING
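For reference, a disk group can also be created without ASMCA, from SQL*Plus connected to the ASM instance. The line below is only a sketch of the equivalent command for an external redundancy disk group using one of the ASMLib labels created earlier; it is not a step performed in this installation:
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';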
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle/oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is only eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
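For reference, a silent database creation driven by command-line options would look roughly like the sketch below. The flags are illustrative only and should be checked against the DBCA documentation for the exact release in use:
[oracle@oracle52 ~]$ dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname HP12C -nodelist oracle52,oracle53 -storageType ASM -diskGroupName DATA -recoveryGroupName FRA -sysPassword <sys_password> -systemPassword <system_password>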
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
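As an illustration, a server pool can also be created and inspected from the command line with srvctl. The pool name and limits below are hypothetical:
[grid@oracle52 ~]$ srvctl add srvpool -serverpool pool12c -min 1 -max 2
[grid@oracle52 ~]$ srvctl config srvpool -serverpool pool12c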
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener parameter points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally, we can check the srvctl configuration for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
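Similarly, when needed (for example before operating system maintenance), the whole stack can be stopped and restarted manually on a node. These commands are run as root with the Grid Infrastructure environment loaded, as above:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs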
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Create the grid and oracle users and groups
The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly.
Let's check first whether the uids and gids are already in use:
[root@oracle52 ~]# grep -E "504|505|506|507|508|509" /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E "502|501" /etc/passwd
[root@oracle52 ~]#
Then let's create the users and groups:
[root@oracle52 ~]# /usr/sbin/groupadd -g 504 asmadmin
[root@oracle52 ~]# /usr/sbin/groupadd -g 505 asmdba
[root@oracle52 ~]# /usr/sbin/groupadd -g 506 asmoper
[root@oracle52 ~]# /usr/sbin/groupadd -g 507 dba
[root@oracle52 ~]# /usr/sbin/groupadd -g 508 oper
[root@oracle52 ~]# /usr/sbin/groupadd -g 509 oinstall
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper -s /bin/bash -u 501 oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -u 502 grid
Oracle strongly encourages creating the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected; this is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups:
[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords:
[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service
To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible. It's also possible to use a script provided in the Grid Infrastructure distribution.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below.
The SSH setup script needs to be run on both nodes of the cluster
[root@oracle52 sshsetup]# su - grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
...
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: The authorized_keys file was not correctly updated. For two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured run the following commands
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_clunodename2 date
should send the date of node2
If this works without prompting for any password the SSH is correctly defined
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users. Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
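A quick way to verify that the new limits are picked up is to open a fresh session for each user and display the values, which should match the settings above. A simple check, assuming pam_limits is active for su (the default on RHEL 6):
[root@oracle52 ~]# su - grid -c "ulimit -n; ulimit -Hn; ulimit -u; ulimit -Hu"
[root@oracle52 ~]# su - oracle -c "ulimit -n; ulimit -Hn; ulimit -u; ulimit -Hu"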
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4; thus, you must install the cvuqdisk RPM manually. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM complete the following procedure
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[rootoracle52 rpm] CVUQDISK_GRP=oinstall export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                [100%]
   1:cvuqdisk               [100%]
Storage connectivity driver configuration
Since Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
...
multipaths {
        multipath {
                wwid 360002ac0000000000000001f00006e40
                mode 0600
                uid 00
                gid 00
        }
}
multipathd>
In order to customize the number of DM Multipath features or to add support for HP devices which are not built-in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built-in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "3PARdata"
                product "VV"
                path_grouping_policy multibus
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 100
                no_path_retry 18
        }
}
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part below that is in red:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
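After the reboot, the value actually used by the driver can be verified through sysfs. A quick check, assuming the qla2xxx module is loaded; it should report the queue depth of 16 set above:
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
16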
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c, we do not need to bind the block device to a raw device, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistence in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
The disable/enable option of the oracleasm script controls whether the package is automatically started at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
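For instance, querydisk can confirm which block device backs a given ASM disk label. A small sketch:
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01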
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="devmapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME): Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation, we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
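Another way to make the scheduler setting persistent, without changing the kernel-wide elevator, is a udev rule applied only to the device-mapper devices backing the ASM disks. The rule below is a minimal sketch under that assumption; the file name and the matching on the multipath alias names are illustrative, not part of this installation:
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical file name)
# Set the deadline scheduler only for the dm devices whose DM name matches the ASM multipath aliases
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data01|fra01|voting", ATTR{queue/scheduler}="deadline"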
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf "%{version}" redhat-release-server, and expects "6Server" to be returned. In Red Hat 6 the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for some missing setup:
runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
38
Select optional languages if needed
Enter the cluster name and SCAN name. Remember the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
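Before continuing, it can be worth confirming from one of the nodes that the SCAN name actually resolves in DNS; oracle34 is the SCAN name used throughout this document (output omitted here, and with three SCAN addresses three entries would be returned):
[grid@oracle52 ~]$ nslookup oracle34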
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for the Ethernet ports. As mentioned earlier, we dedicated 2 interfaces for the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM diskgroup. This diskgroup will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) Note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3 which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource. The crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for shorter output.
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    oracle52
ora.FRA.dg     ora....up.type ONLINE    ONLINE    oracle52
ora....ER.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora....N1.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    oracle52
ora.asm        ora.asm.type   ONLINE    ONLINE    oracle52
ora.cvu        ora.cvu.type   ONLINE    ONLINE    oracle52
ora.mgmtdb     ora....db.type ONLINE    ONLINE    oracle52
ora....network ora....rk.type ONLINE    ONLINE    oracle52
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    oracle52
ora.ons        ora.ons.type   ONLINE    ONLINE    oracle52
ora....SM1.asm application    ONLINE    ONLINE    oracle52
ora....52.lsnr application    ONLINE    ONLINE    oracle52
ora....e52.ons application    ONLINE    ONLINE    oracle52
ora....e52.vip ora....t1.type ONLINE    ONLINE    oracle52
ora....SM2.asm application    ONLINE    ONLINE    oracle53
ora....53.lsnr application    ONLINE    ONLINE    oracle53
ora....e53.ons application    ONLINE    ONLINE    oracle53
ora....e53.vip ora....t1.type ONLINE    ONLINE    oracle53
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
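For reference, a disk group can also be created entirely from the command line instead of ASMCA, using SQL*Plus on the ASM instance. The session below is only a sketch of that alternative: it reuses the ASMLib disk name created earlier, while the ORCL: discovery prefix and the single-disk external-redundancy layout are assumptions to be adapted to the environment.
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';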
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
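As an aside, a server pool can also be defined outside of DBCA with srvctl; the pool name, sizing and importance below are illustrative assumptions only, a sketch rather than part of this installation:
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool mypool -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool -serverpool mypool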
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The network service (aka SQL*Net) allows connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26   00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2:
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
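With the SCAN in place, a client tnsnames.ora entry only needs to reference the SCAN name rather than every node VIP. The entry below is a minimal sketch for the HP12C database created in this document; the alias name and client-side file layout are assumptions left to the client environment:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )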
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
• To verify if we have a well formed cluster for RAC installation, configuration and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations – well-defined uniform behavior
CVU non-goals
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
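The current autostart setting can be displayed at any time with the config option (listed in the crsctl usage output shown earlier); the command reports whether autostart is enabled or disabled:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs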
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
…
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su - oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For a two-way passphrase-free access, it is necessary to manually export the rsa file from the remote node to the local node, as described below:
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsaoracle53
[grid@oracle52 .ssh]$ cat rsaoracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell between all nodes in the cluster manually:
1. On each node, check if ssh is already active:
ssh nodename1 date
ssh nodename2 date
2. Generate the key:
ssh-keygen -b 1024 -t dsa
Accept the default value, without passphrase.
3. Export the public key to the remote node:
cd ~/.ssh
scp id_dsa.pub nodename2:.ssh/id_dsa_username.nodename1.pub
4. Create the trusted connection file:
cat id_dsa.pub >> authorized_keys
cat id_dsa_username.nodename1.pub >> authorized_keys
To establish whether SSH is correctly configured, run the following commands:
ssh nodename1 date
should send the date of node1
ssh nodename2 date
should send the date of node2
ssh private_interconnect_nodename1 date
should send the date of node1
ssh private_interconnect_nodename2 date
should send the date of node2
If this works without prompting for any password, the SSH is correctly defined.
Note
The important point here is there is no password requested
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users.
Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
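A simple way to confirm the new limits are picked up is to open a fresh login shell for each user and display the relevant values; this check assumes pam_limits is active for login sessions, which is the Red Hat default:
[root@oracle52 ~]# su - grid -c 'ulimit -u -n -s'
[root@oracle52 ~]# su - oracle -c 'ulimit -u -n -s'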
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4, thus you must install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the directory rpm on the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node on the cluster:
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install it:
rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
[root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Storage connectivity driver configuration
From Red Hat 5.3 and above, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES 10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is not the case anymore with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command:
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running:
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration:
chkconfig [--level levels] multipathd on
Configuration of the /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings, based on vendor and product values.
Check the current freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
…
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the arrays which are already built in as well. For now our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S390"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "3PARdata"
                product                 "VV"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector           "round-robin 0"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                rr_min_io_rq            100
                no_path_retry           18
        }
}
Update the QLogic FC HBA configuration:
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the part below that is shown in red:
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
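After that reboot, the values actually in use can be read back through sysfs as a quick sanity check; this assumes the qla2xxx module is loaded with the parameters set in fc-hba.conf above (output omitted here):
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry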
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level IO operations, such as creating the file system. You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device, present in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to the /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published a white paper some time ago describing how to combine the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[rootoracle52 ASMLib] yum install kmod-oracleasm-206rh1-2el6x86_64rpm
oracleasmlib-204-1el6x86_64rpm oracleasm-support-218-1el6x86_64rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The enable/disable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be labeled and made available. This is accomplished by creating an ASM disk, once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, such as deletedisk, querydisk, listdisks, etc.
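For example, querydisk maps an ASM disk label back to its underlying block device. A short illustration, using one of the labels created above (output omitted; the exact format may vary between ASMLib versions):
[root@oracle52 ASMLib]# oracleasm querydisk -p DATA01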
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we define a scan order that gives priority to the multipath devices, and we exclude the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
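To make the new scan settings effective without a reboot, the ASMLib service can be restarted and the disks rescanned; a quick check, assuming the service script shown earlier:
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# /usr/sbin/oracleasm scandisks
[root@oracle52 ~]# /usr/sbin/oracleasm listdisks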
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME): Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: we need one of each, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                          Usage                                   Size
/u01/app/oracle               $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c           $ORACLE_HOME for the oracle db user     -
/u01/app/base                 $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c             $ORACLE_HOME for the grid user          -
/dev/oracleasm/disks/FRA01    Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING   OCR and voting (volume)                 20 GB
/dev/oracleasm/disks/DATA01   Database (volume)                       20 GB
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the appropriate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk.
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
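An alternative to the global elevator= boot parameter is a udev rule that applies the Deadline scheduler only to the device-mapper devices. A minimal sketch, assuming RHEL 6 udev; the file name and matching pattern are illustrative and not part of the original setup:
# /etc/udev/rules.d/60-oracle-schedulers.rules
ACTION=="add|change", KERNEL=="dm-[0-9]*", ATTR{queue/scheduler}="deadline"
With such a rule, the internal disks keep the distribution default scheduler while the multipath devices used by ASM get Deadline.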
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME should not be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the rpm command "rpm -q --qf %{version} redhat-release-server" and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verification utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session or an X terminal and a DISPLAY.
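A minimal way to get a working display, assuming a VNC server package is installed on the first node (the display number and host name are illustrative):
[grid@oracle52 ~]$ vncserver :1
[grid@oracle52 ~]$ export DISPLAY=oracle52:1
[grid@oracle52 ~]$ ./runInstaller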
In a basic single installation environment, there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option; we pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to allow running the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34...
Checking integrity of name service switch configuration file
/etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file
/etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
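As a complementary check, the SCAN name can be resolved directly from the operating system; in this setup it should return the SCAN VIP reported later by srvctl (command only, output omitted):
[grid@oracle52 ~]$ nslookup oracle34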
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
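For reference, a disk group can also be created without the GUI, from SQL*Plus on a running ASM instance, using ASMLib disk labels. A minimal sketch; the disk group name and the ORCL:DATA02 label are illustrative and not part of this setup:
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
The 'ORCL:' prefix is how ASMLib-labeled disks are exposed to the ASM instance.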
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME should not be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries; the database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup, or its verification, can also be done in this screen.
Select Languages in this screen
The Standard Edition is only eligible for clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from one node. Terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
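For completeness, a hedged sketch of what a silent invocation could look like; the response file path and its contents are illustrative and would have to be prepared beforehand:
[oracle@oracle52 ~]$ dbca -silent -createDatabase -responseFile /home/oracle/dbca_HP12C.rsp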
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature of 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
On node 1:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
On node 2:
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
# line added by Agent
# listener.ora Network Configuration File:
# /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File
/u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File
/u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally, we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
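The runtime status of the SCAN VIP and SCAN listener can also be checked with srvctl (commands only, output omitted):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener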
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations, with well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This checks the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
  Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
  Version                  :          4
  Total space (kbytes)     :     409568
  Used space (kbytes)      :       1492
  Available space (kbytes) :     408076
  ID                       :  573555284
  Device/File Name         :      +DATA
  Device/File integrity check succeeded
  Device/File not configured
  Device/File not configured
  Device/File not configured
  Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
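In addition, the ocrconfig utility can list the automatic OCR backups taken by the cluster; run it as root from the grid home (command only, output omitted):
[root@oracle52 ~]# $ORACLE_HOME/bin/ocrconfig -showbackup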
Additional commands
To disable or enable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
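Other commands that are useful on a day-to-day basis, run as root or as the grid user depending on local policy (shown for reference, output omitted):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs     # stop the clusterware stack on this node
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs    # start it again
[grid@oracle52 ~]$ crsctl stat res -t                   # tabular status of all cluster resources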
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open, standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
26
Set the limits
To improve the performance of the software you must increase the following shell limits for the oracle and grid users
Update etcsecuritylimitsconf with the following
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
Installing the cvuqdisk RPM for Linux The Oracle Pre-Install RPM is not available for Red Hat 64 thus you must install the cvuqdisk RPM Without cvuqdisk Cluster Verification Utility cannot discover shared disks and you receive the error message ldquoPackage cvuqdisk
not installedrdquo when you run Cluster Verification Utility
To install the cvuqdisk RPM complete the following procedure
1 Locate the cvuqdisk RPM package which is in the directory rpm on the Oracle Grid Infrastructure installation media
2 Copy the cvuqdisk package to each node on the cluster
[rootoracle52 rpm] scp cvuqdisk-109-1rpm oracle53tmp
3 As root use the following command to find if you have an existing version of the cvuqdisk package
[rootoracle52 rpm] rpm -qi cvuqdisk
If you have an existing version then enter the following command to de-install the existing version
rpm -e cvuqdisk
4 Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk typically oinstall
For example
[rootoracle52 rpm] CVUQDISK_GRP=oinstall export CVUQDISK_GRP
5 In the directory where you have saved the cvuqdisk rpm use the following command to install the cvuqdisk
package
[rootoracle52 rpm] rpm -ivh cvuqdisk-109-1rpm
Preparing [100]
1cvuqdisk [100]
Storage connectivity driver configuration
Since Red Hat 53 and above only the QLogic and multipath inbox drivers are supported as stated in the quote below
ldquoBeginning with Red Hat RHEL 52 and Novell SLES 10 SP2 HP will offer a technology preview for inbox HBA drivers in a non-production environment HP will provide full support with subsequent Red Hat RHEL 53 and Novell SLES10 SP3 releasesrdquo
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
27
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper This is not the case anymore with Red Hat 6x However a reference guide is still maintained and is available in the HP storage reference site SPOCK (login required) The document can be reached here
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system use the lspci command
[rootoracle52 yumreposd] lspci|grep Fibre
05000 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05001 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running
[rootoracle52 ~] chkconfig --list |grep multi
multipathd 0off 1off 2off 3on 4on 5on 6off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of the etcmultipathconf The etcmultipathconf file consists of the following sections to configure the attributes of a Multipath device
bull System defaults (defaults)
bull Black-listed devices (devnode_blacklistblacklist)
bull Storage array model settings (devices)
bull Multipath device settings (multipaths)
bull Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable The blacklist section defines which devices should be excluded from the multipath topology discovery The blacklist_exceptions section defines which devices should be included in the multipath topology discovery
despite being listed in the blacklist section The multipaths section defines the multipath topologies They are indexed by a World Wide Identifier (WWID) The devices section defines the device-specific settings based on vendor
and product values
Check the current fresh installed configuration
[rootoracle52 yumreposd] multipathd -k
multipathdgt show Config
hellip
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
28
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathdgt
In order to customize the number of DM Multipath features or to add support of HP devices which are not built-in the user needs to modify etcmultipathconf It is advisable to include the array which is already built-in as well For now our multipathconf file looks like this
[rootoracle52 yumreposd] more etcmultipathconf
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
29
We need to add the following HP 3PAR array profiles and suggested settings to the etcmultipathconf file under
the ldquodevicesrdquo section and use these values
multipathconf written by anaconda
defaults
user_friendly_names yes
devices
device
vendor 3PARdata
product VV
path_grouping_policy multibus
getuid_callout libudevscsi_id --whitelisted --device=devn
path_selector round-robin 0
path_checker tur
hardware_handler 0
failback immediate
rr_weight uniform
rr_min_io_rq 100
no_path_retry 18
Update the QLogic FC HBA configuration
[rootoracle52 yumreposd] more etcmodprobedfc-hbaconf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs
[rootoracle52 yumreposd] cd boot
[[rootoracle52 boot] mv initramfs-2632-358el6x86_64img initramfs-2632-
358el6x86_64imgyan
[rootoracle52 boot] dracut
Finally we may update the boot menu for rollback purpose Add the part below that is in red
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
30
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
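After the reboot, the values actually applied by the qla2xxx driver can be checked through sysfs; a quick verification along these lines (parameter names as set in fc-hba.conf above):
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
Both commands should return the values configured earlier (16 and 10 respectively).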
Enable multipathing for the Oracle shared volumes. The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block- or file-level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly device alias by using the alias and WWID attributes in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) on the HP 3PAR array and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
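If this udev-based approach were used instead of ASMLib, the rules could be activated without a reboot; a sketch, assuming the example rule file above:
[root@dbkon01 rules.d]# udevadm control --reload-rules
[root@dbkon01 rules.d]# udevadm trigger --subsystem-match=block
[root@dbkon01 rules.d]# ls -l /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01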
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library as part of the supplementary channel. Note that with version 6, the RH ASMLib is not formally supported.
HP published a white paper some time ago describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm \
oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful oracleasm commands, such as deletedisk, querydisk, and listdisks.
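For instance, querydisk reports whether a label or a device is a valid ASM disk; a quick illustration (the exact output will vary with the device numbers):
[root@oracle52 ASMLib]# oracleasm querydisk -d DATA01
[root@oracle52 ASMLib]# oracleasm querydisk /dev/mapper/data01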
In order to optimize the scanning effort of Oracle when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order that gives priority to the multipath devices, and we excluded the single-path devices from the scanning process:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
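The modified scan settings are only read when the ASMLib driver scans for disks. To apply them immediately, the oracleasm service can be restarted and the disks rescanned on each node; a short sketch:
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks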
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                            Usage                                   Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user     –
/u01/app/base                   $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
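Because the dm-N numbering can change across reboots, the scheduler can also be set by resolving the multipath aliases instead of hard-coding the dm names. A small sketch using the aliases created earlier, assuming the /dev/mapper entries are symlinks to the dm-N nodes as on RHEL 6:
[root@oracle52 ~]# for d in voting data01 fra01; do
>   dm=$(basename $(readlink -f /dev/mapper/$d))
>   echo deadline > /sys/block/$dm/queue/scheduler
>   cat /sys/block/$dm/queue/scheduler
> done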
In order to make this change persistent across reboots, we can update /etc/grub.conf by appending elevator=deadline to the kernel line:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf '%{version}' redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verification utility – which is located in the base directory of the media – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session or a terminal X session with a valid DISPLAY.
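A simple way to obtain such a display is a VNC session; a sketch, assuming the tigervnc-server package is available on the node:
[root@oracle52 ~]# yum install tigervnc-server
[root@oracle52 ~]# su - grid
[grid@oracle52 ~]$ vncserver :1
A VNC client can then connect to oracle52:1 and run the installer from that session.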
In a basic single installation environment, there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capacity using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
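As an alternative to ASMCA, a disk group can also be created with SQL*Plus from the ASM instance. A minimal sketch showing how the FRA disk group above could equally have been created, run as the grid user with the grid environment set:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';
SQL> exit
The new disk group is mounted automatically on the node where it was created; on the other node it just needs to be mounted with ALTER DISKGROUP FRA MOUNT;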
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oracle:oinstall) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible for clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new 12c DBCA option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when the nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (i.e., SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has changed slightly, as Oracle introduced the SCAN service (seen earlier).
First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2:
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
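Client connections should now go through the SCAN rather than through the individual node listeners. A sketch of a client-side tnsnames.ora entry, using the service name HP12C and the SCAN name oracle34 from this setup (adjust the name resolution to your environment):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )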
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
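The clusterware also keeps automatic OCR backups. They can be listed, and an additional manual backup taken, with ocrconfig run as root; for example:
[root@oracle52 ~]# ocrconfig -showbackup
[root@oracle52 ~]# ocrconfig -manualbackup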
Additional commands
To disable or enable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
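For planned maintenance on a node, the whole stack can also be stopped and restarted with crsctl as root; for example:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check crs
Here $ORACLE_HOME refers to the Grid Infrastructure home, as sourced from the grid user profile above.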
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096

%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We are extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
27
httph20000www2hpcombizsupportTechSupportDocumentjsplang=enampcc=usamptaskId=120ampprodSeriesId=3559651ampprodTypeId=18964ampobjectID=c01430228
HP used to provide an enablement kit for the device-mapper This is not the case anymore with Red Hat 6x However a reference guide is still maintained and is available in the HP storage reference site SPOCK (login required) The document can be reached here
Check if the multipath driver is installed
[rootoracle52 yumreposd] rpm -qa |grep multipath
device-mapper-multipath-049-64el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
[rootoracle52 yumreposd] rpm -qa |grep device-mapper
device-mapper-persistent-data-014-1el6x86_64
device-mapper-event-libs-10277-9el6x86_64
device-mapper-event-10277-9el6x86_64
device-mapper-multipath-049-64el6x86_64
device-mapper-libs-10277-9el6x86_64
device-mapper-10277-9el6x86_64
device-mapper-multipath-libs-049-64el6x86_64
To check which HBAs are installed in the system use the lspci command
[rootoracle52 yumreposd] lspci|grep Fibre
05000 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
05001 Fibre Channel QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI
Express HBA (rev 02)
Check if the multipath daemon is already running
[rootoracle52 ~] chkconfig --list |grep multi
multipathd 0off 1off 2off 3on 4on 5on 6off
[rootoracle52 ~] service multipathd status
multipathd (pid 5907) is running
If the multipath driver is not enabled by default at boot change the configuration
chkconfig [--level levels] multipathd on
Configuration of the etcmultipathconf file
The etcmultipathconf file consists of the following sections to configure the attributes of a Multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
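For orientation only, a minimal /etc/multipath.conf skeleton showing how these sections fit together could look like the following. The values are illustrative placeholders (the alias name "system" is our own choice), not the settings used for this installation; the actual file used here is shown further below.
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
blacklist_exceptions {
    wwid "360002ac0000000000000001f00006e40"
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
    }
}
multipaths {
    multipath {
        wwid "360002ac0000000000000001f00006e40"
        alias system
    }
}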
Check the freshly installed default configuration:
[rootoracle52 yumreposd] multipathd -k
multipathd> show config
…
multipaths
multipath
wwid 360002ac0000000000000001f00006e40
mode 0600
uid 00
gid 00
multipathd>
In order to customize the DM Multipath features or to add support for HP devices which are not built in, the user needs to modify etcmultipathconf. It is advisable to include the arrays which are already built in as well. For now our multipathconf file looks like this:
[rootoracle52 yumreposd] more etcmultipathconf
multipathconf written by anaconda
defaults
user_friendly_names yes
blacklist
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]
devnode ^hd[a-z]
devnode ^dcssblk[0-9]
device
vendor DGC
product LUNZ
device
vendor IBM
product S390
dont count normal SATA devices as multipaths
device
vendor ATA
dont count 3ware devices as multipaths
device
vendor 3ware
device
vendor AMCC
nor highpoint devices
device
vendor HPT
device
vendor HP
product Virtual_DVD-ROM
wwid
blacklist_exceptions
wwid 360002ac0000000000000001f00006e40
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
We need to add the following HP 3PAR array profile and suggested settings to the etcmultipathconf file under the devices section, using these values:
multipathconf written by anaconda
defaults
user_friendly_names yes
devices
device
vendor 3PARdata
product VV
path_grouping_policy multibus
getuid_callout libudevscsi_id --whitelisted --device=devn
path_selector round-robin 0
path_checker tur
hardware_handler 0
failback immediate
rr_weight uniform
rr_min_io_rq 100
no_path_retry 18
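After editing etcmultipathconf, the running daemon has to pick up the new settings before they take effect. A quick way to apply and verify them, assuming the standard RHEL 6 service scripts, is:
# reload the multipath configuration without a reboot
service multipathd reload
# confirm that the 3PARdata device section is now part of the active configuration
multipathd -k"show config" | grep -A 12 3PARdata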
Update the FC HBA configuration (the qla2xxx options apply to the QLogic HBAs used here; the lpfc options would apply to Emulex HBAs):
[rootoracle52 yumreposd] more etcmodprobedfc-hbaconf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30
lpfc_discovery_threads=32
Then rebuild the initramfs:
[rootoracle52 yumreposd] cd boot
[rootoracle52 boot] mv initramfs-2632-358el6x86_64img initramfs-2632-358el6x86_64imgyan
[rootoracle52 boot] dracut
Finally, we may update the boot menu for rollback purposes. Add the backup entry shown below (the third title block, ending in bkp, which points to the renamed imgyan initramfs):
[rootoracle52 boot] cd bootgrub
[rootoracle52 grub] vi menulst
grubconf generated by anaconda
Note that you do not have to rerun grub after making changes to this file
NOTICE You have a boot partition This means that
all kernel and initrd paths are relative to boot eg
root (hd00)
kernel vmlinuz-version ro root=devmappermpathap2
initrd initrd-[generic-]versionimg
boot=devmpatha
default=0
timeout=5
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64img
title Red Hat Enterprise Linux (2632-358el6x86_64)
root (hd00)
kernel vmlinuz-2632-358el6x86_64 ro root=UUID=51b7985c-3b07-4543-
9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd initramfs-2632-358el6x86_64img
title Red Hat Enterprise Linux Server (2632-358141el6x86_64) bkp
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto
initrd initramfs-2632-358141el6x86_64imgyan
The QLogic parameters will only be used after the next reboot
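After the reboot, the values actually picked up by the qla2xxx module can be read back from sysfs; a simple sanity check (parameter names as used in the options line above) is:
# verify the HBA parameters loaded at boot
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
cat /sys/module/qla2xxx/parameters/ql2xloginretrycount
cat /sys/module/qla2xxx/parameters/qlport_down_retry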
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the devmapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level IO operations, such as creating the file system. You must use the devices under devmapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipaths subsection of the etcmultipathconf file.
We already created 5 LUNs in the HP 3PAR SAN (one dedicated to each node for the operating system and 3 shared for ASM) and presented them to both oracle52 and oracle53. So far only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in etcmultipathconf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[rootoracle52 ~] ls devmapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to etcmultipathconf:
multipaths
multipath
uid 0
gid 0
wwid 360002ac0000000000000001f00006e40
mode 0600
multipath
wwid 360002ac0000000000000002100006e40
alias voting
multipath
wwid 360002ac0000000000000002200006e40
alias data01
multipath
wwid 360002ac0000000000000002300006e40
alias fra01
In order to create the multipath devices with the defined alias names, execute multipath -v1 (you may need to execute multipath -F first to flush the old device names):
[rootoracle52 ~] multipath -F
[rootoracle52 ~] multipath -v1
fra01
data01
voting
[rootoracle52 ~] ls devmapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the rclocal file (a minimal sketch follows below)
2. Creating a udev rule (see the example further below, which is not relevant to our environment)
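For completeness, a minimal rclocal sketch for option 1, assuming the multipath alias names defined above and the same owners and modes as in the udev example that follows, could be:
# /etc/rc.local additions - re-apply ownership and permissions at every boot
chown root:oinstall /dev/mapper/voting
chmod 0640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 0660 /dev/mapper/data01 /dev/mapper/fra01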
In such a case we would have to update the system as below. The file called 99-oraclerules is a copy of etcudevrulesd60-rawrules which has been updated with our own data:
[rootdbkon01 rulesd] pwd
etcudevrulesd
[rootdbkon01 rulesd] more 99-oraclerules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published a white paper some time ago describing how to combine the device-mapper with ASMLib. This white paper is available here (see the For more information section).
ASMLib consists of the following components
• An open source (GPL) kernel module package, kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package, oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package, oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[rootoracle52 ASMLib] yum install kmod-oracleasm-206rh1-2el6x86_64rpm
oracleasmlib-204-1el6x86_64rpm oracleasm-support-218-1el6x86_64rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script etcinitdoracleasm.
Run the etcinitdoracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[rootoracle52 ASMLib] usrsbinoracleasm init
[rootoracle52 ASMLib] etcinitdoracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable options of the oracleasm script control whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be labeled and made available. This is accomplished by creating an ASM disk, once for the entire cluster:
[rootoracle52 ASMLib] oracleasm createdisk VOTING devmappervoting
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk DATA01 devmapperdata01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk FRA01 devmapperfra01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it: run the createdisk command on one node, and then run scandisks on every other node:
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group:
[rootoracle52 ASMLib] ls -l devoracleasmdisks
brw-rw---- 1 grid asmadmin 253 5 Jul 25 1526 DATA01
brw-rw---- 1 grid asmadmin 253 4 Jul 25 1526 FRA01
brw-rw---- 1 grid asmadmin 253 6 Jul 25 1526 VOTING
There are some other useful commands like deletedisk querydisk listdisks etc
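For example, querydisk can be used to map an ASM disk label back to its block device; assuming the labels created above and the options provided by the oracleasm-support package, a quick check would be:
# show the device associated with the DATA01 label
oracleasm querydisk -d DATA01
# show the physical paths carrying the DATA01 label
oracleasm querydisk -p DATA01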
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we define a scan order that gives priority to the multipath devices, and we exclude the single-path devices from the scan:
[rootoracle52 ~] vi etcsysconfigoracleasm
ORACLEASM_SCANORDER Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
ORACLEASM_SCANEXCLUDE Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME): Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if redundancy other than external is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in tmp. tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                          Usage                                      Size
u01apporacle                  $ORACLE_BASE for the oracle db owner       5.8 GB
u01apporacle12c               $ORACLE_HOME for the oracle db user        –
u01appbase                    $ORACLE_BASE for the grid owner            3.5 GB
u01appgrid12c                 $ORACLE_HOME for the grid user             –
devoracleasmdisksFRA01        Flash recovery area (ASM)                  20 GB
devoracleasmdisksVOTING       OCR (volume)                               2 GB
devoracleasmdisksDATA01       Database (volume)                          20 GB
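Before creating the directories below, the available space can be verified quickly, for instance:
# check free space for the Oracle software trees and the temporary directory
df -h /u01 /tmp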
Create the inventory location
[rootoracle52 ~] mkdir -p u01apporacleoraInventory
[rootoracle52 ~] chown -R gridoinstall u01apporacleoraInventory
[rootoracle52 ~] chmod -R 775 u01apporacleoraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user
[rootoracle53 u01] mkdir -p u01appgrid12c
[rootoracle53 u01] chown -R gridoinstall u01appgrid
[rootoracle53 u01] chmod -R 775 u01appgrid
Create the installation directories and set the accurate privileges on both nodes for the oracle user
[rootoracle52 oracle] mkdir u01apporacle12c
[rootoracle52 oracle] chown -R oracleoinstall u01apporacle
[rootoracle52 oracle] chmod -R 775 u01apporacle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder, delay, or merge requests for disk IO to achieve better throughput and lower latency. Linux has multiple disk IO schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline IO scheduler.
In order to change the IO scheduler, we first need to identify the device-mapper path (dm-N) for each and every ASM disk:
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id, for instance:
[rootoracle52 sys] scsi_id --whitelist --replace-whitespace --
device=devmapperdata01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[rootoracle52 sys] echo deadline gt sysblockdm-4queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-5queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-6queuescheduler
Next check that the IO scheduler status has been updated
[rootoracle52 sys] cat sysblockdm-6queuescheduler
noop anticipatory [deadline] cfq
In order to make this change persistent we can update etcgrubconf
[rootoracle52 sys] vi etcgrubconf
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto elevator=deadline
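An alternative to the global elevator=deadline boot parameter is a udev rule that applies the Deadline scheduler only to the ASM multipath devices. A minimal sketch, assuming the alias names created earlier and a rule file name of our own choosing, is:
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data01|fra01|voting", ATTR{queue/scheduler}="deadline"
# reload the rules and re-trigger the block devices
udevadm control --reload-rules
udevadm trigger --subsystem-match=block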
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[rootoracle52 sys] visudo
Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD ALL
oracle ALL=(ALL) NOPASSWD ALL
Once this setting is enabled grid and oracle users can act as root by prefixing each and every command with a sudo For instance
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
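Turning it off afterwards is simply a matter of removing the two temporary entries again with visudo; the lines to delete are:
# remove these lines from /etc/sudoers (via visudo) once the installation is finished
grid ALL=(ALL) NOPASSWD: ALL
oracle ALL=(ALL) NOPASSWD: ALL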
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=u01appbase
export ORACLE_HOME=u01appgrid12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1noarchrpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[rootoracle53 kits] rpm -ivh redhat-release-6Server-1noarchrpm
Preparing [100]
1redhat-release [100]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verification utility - which is located in the base directory of the media file - and check for any missing setup:
runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported; we can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a vncserver session or a terminal X and a display.
In a basic single installation environment there is no need for automatic updates; any automatic update policy would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and the SCAN name. Remember that the SCAN name needs to be resolvable by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check it from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capability using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option; we pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings.
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in u01apporacleoraInventorylogs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef|grep ora
ps -ef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl both deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can also be used with the "-t" flag for shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups, so there is now a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox is only used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
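As an alternative to ASMCA, a disk group can also be created from SQL*Plus connected to the ASM instance as sysasm. The sketch below assumes a hypothetical additional ASMLib label DATA02 and is not a step performed in this paper:
sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
-- the creating instance mounts it automatically; on the second node run:
SQL> ALTER DISKGROUP DATA2 MOUNT;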
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle user (oracle:oinstall) and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is only eligible on clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management option with 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (SQL*Net) allows connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
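On the client side, connections then only need to reference the SCAN name. A tnsnames.ora entry for the database created above (service name HP12C, SCAN name oracle34) might look like this; it is an illustration, not a file configured in this paper:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )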
Look at the listenerora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
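The runtime status of the same components can be cross-checked with srvctl, for instance:
# as the grid user: SCAN VIPs and SCAN listeners
srvctl status scan
srvctl status scan_listener
# as the oracle user: instances of the HP12C database
srvctl status database -d HP12C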
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find the Cluster Verification Utility (CVU), a validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check the hardware and operating system setup.
Check the clusterware integrity:
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable (or re-enable) the cluster autostart:
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
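In the same way, the whole clusterware stack can be stopped and restarted manually on one node (as root, with the grid environment sourced as above):
# stop and restart the stack on the local node, then check it
$ORACLE_HOME/bin/crsctl stop crs
$ORACLE_HOME/bin/crsctl start crs
$ORACLE_HOME/bin/crsctl check crs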
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinusropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsambabinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE_HOMElib
export CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinusropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsambabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE_HOMElib
export CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
multipaths {
        multipath {
                wwid  360002ac0000000000000001f00006e40
                mode  0600
                uid   0
                gid   0
        }
}
In order to customize DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include the array which is already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor "HP"
                product "Virtual_DVD-ROM"
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid  0
                gid  0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                "3PARdata"
                product               "VV"
                path_grouping_policy  multibus
                getuid_callout        "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector         "round-robin 0"
                path_checker          tur
                hardware_handler      "0"
                failback              immediate
                rr_weight             uniform
                rr_min_io_rq          100
                no_path_retry         18
        }
}
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
Finally, we may update the boot menu for rollback purposes. Add the backup stanza shown at the end of the listing below (highlighted in red in the original document):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
# boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot
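To confirm after the reboot that the new queue-depth values are actually in effect, the running module parameters can be read back from sysfs. A minimal check, assuming the qla2xxx module is loaded:

# Verify the qla2xxx values picked up from /etc/modprobe.d/fc-hba.conf
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
cat /sys/module/qla2xxx/parameters/ql2xloginretrycount
cat /sys/module/qla2xxx/parameters/qlport_down_retry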
Enable the multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly named device alias by using the alias and the WWID attributes of the multipath device present in the multipath subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[rootoracle52 yumreposd] multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[rootoracle52 ~] multipath -v0
[rootoracle52 ~] multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=0 hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
[rootoracle52 ~]
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid  0
                gid  0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid  "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid  "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid  "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block device to a raw device, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat provides its own library; it is part of the supplementary channel. As of version 6, the Red Hat ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]). Hitting <ENTER> without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the package is started automatically at boot time.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk querydisk listdisks etc
In order to optimize the scanning effort of Oracle when preparing the ASM disks we can update the oracleasm parameter file as below In this update we defined a scan order with priority for the multipath device and we excluded the single path device of the scanning process
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
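If the ASMLib service ever needs to be cycled without a reboot, it can be restarted and queried through the same init script; a small sketch:

# restart and verify the ASMLib service without rebooting
service oracleasm restart
service oracleasm status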
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories
Path                           Usage                                    Size
/u01/app/oracle                $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user      -
/u01/app/base                  $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user           -
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                        20 GB
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
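As an alternative to the global elevator= kernel switch, the deadline scheduler can be pinned only to the device-mapper devices with a udev rule. A minimal sketch; the rule file name 12-dm-scheduler.rules is ours, not part of the original setup:

# /etc/udev/rules.d/12-dm-scheduler.rules (hypothetical file name)
# Apply the deadline elevator to every device-mapper block device as it appears
ACTION=="add|change", KERNEL=="dm-[0-9]*", ATTR{queue/scheduler}="deadline"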
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root or you can delegate to the installer the privilege to run configuration steps as root using one of the following options
bull Use the root password Provide the password to the installer as you are providing other configuration information
The password is used during installation and not stored The root user password must be identical on each cluster member node To enable root command delegation provide the root password to the installer when prompted
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member
of the sudoers list and provide the username and password when prompted during installation
[root@oracle52 sys]# visudo
# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled grid and oracle users can act as root by prefixing each and every command with a sudo For instance
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
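Once the Grid Infrastructure and database binaries are installed, the temporary privileges can be revoked by removing or commenting out the two lines added earlier; a sketch of the resulting /etc/sudoers fragment:

# Allow root to run any commands anywhere
root      ALL=(ALL)       ALL
# grid    ALL=(ALL)       NOPASSWD: ALL      <- commented out after installation
# oracle  ALL=(ALL)       NOPASSWD: ALL      <- commented out after installation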
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                 [100%]
   1:redhat-release          [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf %{version} redhat-release-server, and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location The runInstaller program is located in the root directory of the distribution
In order to run the installer graphical interface, it is necessary to set up a VNC server session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and the SCAN name. Remember that the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
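Before going further, it is worth checking that the SCAN name really resolves from both nodes. A quick sanity check with the SCAN name used in this setup (oracle34):

[grid@oracle52 ~]$ nslookup oracle34
# With the recommended setup the name returns three addresses in round-robin;
# in this cookbook a single address (172.16.0.34) is used.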
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
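For administrators who prefer the command line, comparable disk groups can also be created directly from the ASM instance with SQL. A minimal sketch using the ASMLib disk labels created earlier, run as the grid user against the +ASM1 instance (not the method used in this cookbook, which relies on ASMCA):

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';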
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
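The same information can also be retrieved with SQL*Plus from the ASM instance; a short equivalent query, assuming the grid environment (+ASM1) is set as above:

[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;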
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oracle:oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster of 4 CPUs (sockets) maximum.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
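Server pools can also be managed outside of DBCA with srvctl. A hedged sketch; the pool name POOL1 and the minimum/maximum sizes are illustrative and not part of this setup:

[grid@oracle52 ~]$ srvctl add srvpool -serverpool POOL1 -min 1 -max 2
[grid@oracle52 ~]$ srvctl config srvpool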
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid  10466  1  0 Jul26 ?  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid  12601  1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid  22050  1  0 Jul26 ?  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
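At this point a client can reach either instance through the SCAN. A minimal client-side tnsnames.ora entry, assuming the database service name HP12C used in this cookbook:

HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )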
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
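Oracle Clusterware also takes automatic backups of the OCR at regular intervals; to see where the most recent copies are kept, ocrconfig can be queried. A quick sketch, run as root:

[root@oracle52 ~]# ocrconfig -showbackup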
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
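ORACLE_SID is intentionally left empty in this template. A small sketch of how it might be set per node once the database exists (the instance names HP12C_1 and HP12C_2 are the ones reported later by the listeners; adjust to your own database name):
# on oracle52
export ORACLE_SID=HP12C_1
# on oracle53
export ORACLE_SID=HP12C_2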
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook, and will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates: hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
We need to add the following HP 3PAR array profile and suggested settings to the /etc/multipath.conf file, under the "devices" section, and use these values:
# multipath.conf written by anaconda
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor                  "3PARdata"
        product                 "VV"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector           "round-robin 0"
        path_checker            tur
        hardware_handler        "0"
        failback                immediate
        rr_weight               uniform
        rr_min_io_rq            100
        no_path_retry           18
    }
}
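After editing /etc/multipath.conf, the running configuration can be refreshed without a reboot. A quick, hedged sketch, assuming the multipathd service is already running:
[root@oracle52 ~]# service multipathd reload
[root@oracle52 ~]# multipath -r
[root@oracle52 ~]# multipath -ll | grep 3PARdata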
Update the QLogic FC HBA configuration
[root@oracle52 yum.repos.d]# more /etc/modprobe.d/fc-hba.conf
options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=10
options lpfc lpfc_lun_queue_depth=16 lpfc_nodev_tmo=30 lpfc_discovery_threads=32
Then rebuild the initramfs:
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
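Once the node has been rebooted on the new initramfs, the values actually used by the driver can be verified through sysfs. A minimal check, assuming the qla2xxx module is the one loaded on these blades:
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
16
[root@oracle52 ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
10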
Finally, we may update the boot menu for rollback purposes by adding the backup entry (the last title block below):
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file.
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#          initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only take effect after the next reboot.
Enable multipathing for the Oracle shared volumes. The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block- or file-level I/O operations, such as creating a file system. You must use the devices under /dev/mapper. You can create a user-friendly device alias by using the alias and wwid attributes of the multipath device in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command:
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
    multipath {
        uid     0
        gid     0
        wwid    "360002ac0000000000000001f00006e40"
        mode    0600
    }
    multipath {
        wwid    "360002ac0000000000000002100006e40"
        alias   voting
    }
    multipath {
        wwid    "360002ac0000000000000002200006e40"
        alias   data01
    }
    multipath {
        wwid    "360002ac0000000000000002300006e40"
        alias   fra01
    }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names):
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
With 12c we no longer need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
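For completeness, a minimal sketch of the /etc/rc.local approach (option 1) is shown below. It is not needed in our ASMLib-based setup, and the grid:asmadmin ownership is an assumption mirroring what ASMLib configures later:
# /etc/rc.local additions (sketch, only when ASMLib is not used)
chown grid:asmadmin /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/voting /dev/mapper/data01 /dev/mapper/fra01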
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library as part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components
bull An open source (GPL) kernel module package kmod-oracleasm (provided by Red Hat)
bull An open source (GPL) utilities package oracleasm-support (provided by Oracle)
bull A closed source (proprietary) library package oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver file system needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
The disable/enable option of the oracleasm script controls whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster:
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node:
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group:
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are other useful commands such as deletedisk, querydisk, and listdisks.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we define a scan order that gives priority to the multipath devices, and we exclude the single-path devices from the scan:
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
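To confirm that each ASMLib disk is indeed backed by the intended device-mapper device (and not by one of the underlying single paths), oracleasm querydisk can be used. A quick check, assuming the disks created above (output abbreviated):
[root@oracle52 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/mapper/data01: LABEL="DATA01" TYPE="oracleasm"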
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called the GRID ORACLE_HOME): Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location of the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location can be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                              Usage                                   Size
/u01/app/oracle                   $ORACLE_BASE for the oracle db owner    5.8 GB
/u01/app/oracle/12c               $ORACLE_HOME for the oracle db user     –
/u01/app/base                     $ORACLE_BASE for the grid owner         3.5 GB
/u01/app/grid/12c                 $ORACLE_HOME for the grid user          –
/dev/oracleasm/disks/FRA01        Flash recovery area (ASM)               20 GB
/dev/oracleasm/disks/VOTING       OCR (volume)                            2 GB
/dev/oracleasm/disks/DATA01       Database (volume)                       20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the correct privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the correct privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
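A quick way to confirm that the file systems backing these locations really offer the space listed above is a df/du check; a simple sketch (the /u01 mount point is an assumption, adjust it to your own layout):
[root@oracle52 ~]# df -h /u01 /tmp
[root@oracle52 ~]# du -sh /u01/app/*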
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path (dm-N) for each ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use the scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelisted --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
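Because the dm-N numbering can change after a reconfiguration or reboot, a small loop driven by the current multipath output can be less error-prone than hard-coding the device names; a sketch, assuming the three ASM aliases created earlier:
for alias in data01 fra01 voting; do
    dm=$(multipath -l $alias | head -1 | awk '{print $3}')   # third field is the dm-N name
    echo deadline > /sys/block/${dm}/queue/scheduler
done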
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks.
You can continue to run the scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it is necessary to set up a VNC server session or an X terminal and a DISPLAY.
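A minimal sketch of that setup, assuming a VNC server is available on node oracle52 and the grid software has been unzipped under /kits/grid (both paths are assumptions):
[grid@oracle52 ~]$ vncserver :1
[grid@oracle52 ~]$ export DISPLAY=oracle52:1
[grid@oracle52 ~]$ cd /kits/grid
[grid@oracle52 grid]$ ./runInstaller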
In a basic single installation environment there is no need for an automatic update; whether to use automatic updates is a customer decision.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a flex cluster.
Select "Advanced Installation".
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember the SCAN name needs to be resolvable by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check it from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will provide HA capability using both interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option; we pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path" [ID 1210863.1].
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources:
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output:
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf ...
All nodes have same "hosts" entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox is only used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
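The mapping between each disk group and its ASMLib disks can also be checked from the command line; a quick sketch, assuming the +ASM1 environment set above (lsdsk -k prints the disk group, size, and path of every disk):
[grid@oracle52 ~]$ asmcmd lsdsk -k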
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall group) user and start the runInstaller from your distribution location.
Define here whether or not to receive security updates from My Oracle Support.
A warning message is displayed if we decline the previous suggestion.
Define here whether or not to use the software updates from My Oracle Support.
For now we just want to install the binaries; the database will be created later with DBCA.
Select the RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done here.
Select languages in this screen.
The Standard Edition is eligible on clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed.
Create a RAC database
Get connected as the oracle user, then start DBCA from a node. Terminal X access is needed here again (unless using the silent mode based on a response file, which is not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The listener service (SQL*Net) allows connections to the database instances. Since 11gR2 the way it works has changed slightly, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener parameter points to the SCAN service instead of a list of node listeners.
On node 1:
SQL> show parameter local_lis
NAME                TYPE        VALUE
------------------- ----------- ------------------------------
local_listener      string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                TYPE        VALUE
------------------- ----------- ------------------------------
remote_listener     string      oracle34:1521
On node 2:
SQL> show parameter local_lis
NAME                TYPE        VALUE
------------------- ----------- ------------------------------
local_listener      string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                TYPE        VALUE
------------------- ----------- ------------------------------
remote_listener     string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON          # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally, we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
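The runtime state of the SCAN VIP and SCAN listener can be checked the same way; a short, hedged example (the exact output varies with the node currently hosting the SCAN):
[grid@oracle52 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node oracle52
[grid@oracle52 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node oracle52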
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This checks the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location:
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster registry:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
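Oracle Clusterware also backs up the OCR automatically at regular intervals; listing these backups is a useful complement to ocrcheck. A quick, hedged example (run as root or the grid user; the backup locations shown will be whatever your installation has actually produced):
[root@oracle52 ~]# ocrconfig -showbackup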
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
30
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
The QLogic parameters will only be used after the next reboot.
Enable multipathing for the Oracle shared volumes. The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operation, such as creating the file system.
You must use the devices under /dev/mapper. You can create a user-friendly device alias by using the alias and the WWID attributes of the multipath device, present in the multipath subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the "multipath" command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control  mpatha  mpathap1  mpathap2  mpathap3  mpathb  mpathc  mpathd
These WWIDs can now be used to create customized multipath device names, by adding the entries below to /etc/multipath.conf:
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid "360002ac0000000000000002100006e40"
                alias voting
        }
        multipath {
                wwid "360002ac0000000000000002200006e40"
                alias data01
        }
        multipath {
                wwid "360002ac0000000000000002300006e40"
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[rootoracle52 ~] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdataVV
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1000 sda 80 active undef running
`- 2000 sde 864 active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is not supported anymore.
If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules, which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
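For reference, the rc.local approach mentioned in option 1 above would amount to re-applying ownership and permissions at every boot. The lines below are only a sketch mirroring the ownership used in the udev example; they are not needed in this setup since ASMLib handles device ownership:
# /etc/rc.local additions (sketch only; not used in this installation)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01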
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and by allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the package is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it: run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices; they should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update, we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
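A change to /etc/sysconfig/oracleasm only takes effect once the driver re-reads its configuration. A minimal check, assuming the init script shipped with oracleasm-support, could be:
[root@oracle52 ~]# /etc/init.d/oracleasm restart
[root@oracle52 ~]# oracleasm scandisks
[root@oracle52 ~]# oracleasm listdisks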
Check that oracleasm will be started automatically after the next boot
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME): Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if external redundancy is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example we created the following directories:
Path                            Usage                                    Size
/u01/app/oracle                 $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c             $ORACLE_HOME for the oracle db user      -
/u01/app/base                   $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c               $ORACLE_HOME for the grid user           -
/dev/oracleasm/disks/FRA01      Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING     OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01     Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk.
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use its scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
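Note that the elevator=deadline boot parameter applies to every block device in the server. If a per-device setting is preferred, a udev rule is another common way to make the scheduler persistent. The rule below is only a sketch and is not part of the procedure followed in this paper; the file name is an arbitrary assumption:
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"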
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root   ALL=(ALL)  ALL
grid   ALL=(ALL)  NOPASSWD: ALL
oracle ALL=(ALL)  NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the following rpm command: rpm -q --qf '%{version}' redhat-release-server, and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility - which is located in the base directory of the media file - and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case, an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X session with a DISPLAY.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolvable by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
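A quick way to verify the SCAN name resolution before going further is to query the DNS from one of the nodes. This is only a sanity-check sketch; in this setup the SCAN oracle34 resolves to a single address (172.16.0.34), whereas a production DNS would typically return three addresses in round robin:
[grid@oracle52 ~]$ nslookup oracle34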
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification:
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output.
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
"/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file
"/etc/nsswitch.conf" passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and the management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance: ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with "asmca".
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
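In this paper the disk groups were created with ASMCA, but the same operation can be scripted from the ASM instance with SQL*Plus. The lines below are only an illustration of that alternative (the disk group and disk names reuse the ASMLib labels created earlier); they are not a step of this installation:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';
SQL> -- on the other node(s), mount the new disk group:
SQL> ALTER DISKGROUP FRA MOUNT;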
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option in DBCA. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
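Server pools can also be managed outside of DBCA with srvctl. As an illustration only (the pool name below is hypothetical and not part of this installation), a pool spanning one to two servers could be created and then listed with:
[grid@oracle52 ~]$ srvctl add srvpool -serverpool mypool -min 1 -max 2
[grid@oracle52 ~]$ srvctl config srvpool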
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
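Before moving on to the post-installation steps, the state of the new database can be checked from any node. These are standard srvctl and SQL*Plus checks (output not shown); the database name HP12C is the one created above:
[oracle@oracle52 ~]$ srvctl status database -d HP12C
[oracle@oracle52 ~]$ sqlplus / as sysdba
SQL> SELECT inst_id, instance_name, status FROM gv$instance;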
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
On node 1:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
On node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
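On the client side, a net service entry only needs to reference the SCAN. The following is a minimal tnsnames.ora sketch for the database created in this paper (SCAN name oracle34, service HP12C); the exact entry depends on the client's naming configuration:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )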
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))   # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON        # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET  # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON        # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET  # line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
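The runtime state of the SCAN VIP and SCAN listener can be checked the same way; these are standard srvctl queries, shown here without their output:
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener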
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms/configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
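The OCR is also backed up automatically by the clusterware; the existing backups can be listed, and a manual backup taken, with ocrconfig (run as root):
[root@oracle52 ~]# ocrconfig -showbackup
[root@oracle52 ~]# ocrconfig -manualbackup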
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
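The current autostart setting can be displayed with the config option of crsctl (listed in the crsctl usage output shown earlier):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs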
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf:
multipaths {
    multipath {
        uid     0
        gid     0
        wwid    360002ac0000000000000001f00006e40
        mode    0600
    }
    multipath {
        wwid    360002ac0000000000000002100006e40
        alias   voting
    }
    multipath {
        wwid    360002ac0000000000000002200006e40
        alias   data01
    }
    multipath {
        wwid    360002ac0000000000000002300006e40
        alias   fra01
    }
}
In order to create the multipath devices with the defined alias names, execute multipath -v0 (you may need to execute multipath -F first to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control  data01  fra01  mpatha  mpathap1  mpathap2  mpathap3  voting
[root@oracle52 ~]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0   active undef running
  `- 2:0:0:0 sde 8:64  active undef running
With 12c we do not need to bind the block devices to raw devices, as raw is no longer supported.
If we were not using ASMLib, we would need to manage the right level of permission on the shared volumes. This can be achieved in two ways (a minimal rc.local sketch follows this list, and the udev example is shown right after it):
1. Updating the /etc/rc.local file
2. Creating a udev rule (see the example below, which is not relevant to our environment)
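As an illustration of the first option, here is a minimal /etc/rc.local sketch, assuming the alias names (voting, data01, fra01) and the ownership scheme used elsewhere in this paper; it is not needed in our environment since ASMLib handles permissions:
# Hypothetical /etc/rc.local additions: reset ownership and mode of the shared
# multipath devices at every boot (device files are recreated at boot time)
chown root:oinstall /dev/mapper/voting
chmod 640 /dev/mapper/voting
chown oracle:dba /dev/mapper/data01 /dev/mapper/fra01
chmod 660 /dev/mapper/data01 /dev/mapper/fra01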
In such a case we would have to update the system as below. The file called "99-oracle.rules" is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data:
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# Oracle Configuration Registry
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Voting Disks
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) does provide its own library; it is part of the supplementary channel. As of version 6, the RH ASMLib is not supported.
HP published some time ago a white paper describing how to articulate the device-mapper with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here.
For the installation, move to the directory where the packages are located and install them:
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm \
oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
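Before going further, it is worth confirming that the three packages are in place on each node; a quick hedged check (the version strings will match whatever was just installed):
[root@oracle52 ASMLib]# rpm -qa | grep oracleasm
kmod-oracleasm-2.0.6.rh1-2.el6.x86_64
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64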
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[ ]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
The disable/enable options of the oracleasm script activate or deactivate the automatic startup of the package.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by creating an ASM disk, once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
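For instance, querydisk can be used to confirm the mapping between an ASM disk label and the underlying block device. A hedged example for the disks created above (the exact output wording may differ slightly between ASMLib releases):
[root@oracle52 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [253,5]
[root@oracle52 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/mapper/data01: LABEL="DATA01" TYPE="oracleasm"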
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order with priority for the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot:
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off
Check the available disk space
Starting with RAC 11gR2 only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases The reason is the ASM directory is now part of the cluster ORACLE-HOME (also called GRID ORACLE_HOME) Oracle considers that storage and cluster management are system administration tasks while the database is a dba task
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
bull At least 35 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) The Oracle base includes Oracle Clusterware and Oracle ASM log files
bull 58 GB of disk space for the Oracle home (the location for the Oracle Database software binaries)
bull OCR and Voting disks Need one of each or more if external redundancy is used The size of each file is 1GB
bull Database space Depends on how big the database will be Oracle recommends at least 2GB
bull Temporary space Oracle requires 1GB space in tmp tmp is used by default or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation
In this example we created the following directories:
Path                             Usage                                    Size
/u01/app/oracle                  $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c              $ORACLE_HOME for the oracle db user      –
/u01/app/base                    $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c                $ORACLE_HOME for the grid user           –
/dev/oracleasm/disks/FRA01       Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING      OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01      Database (volume)                        20 GB
Create the inventory location:
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk:
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32  active undef running
  `- 2:0:0:2 sdg 8:96  active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16  active undef running
  `- 2:0:0:1 sdf 8:80  active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
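Another way to make the scheduler setting persistent, without changing the kernel command line for all devices, is a udev rule that applies Deadline only to the device-mapper devices. This is a minimal sketch, not part of the original environment; the rule file name is arbitrary:
[root@oracle52 sys]# cat /etc/udev/rules.d/99-io-scheduler.rules
# Apply the deadline elevator to every device-mapper block device when it appears
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"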
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list privileges to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
...
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
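To double-check what a user is currently allowed to run through sudo (and to confirm the entries have been removed once the installation is finished), the sudoers configuration can be queried as root; a short hedged example:
[root@oracle52 ~]# sudo -l -U grid
User grid may run the following commands on this host:
    (ALL) NOPASSWD: ALL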
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the command rpm -q --qf '%{version}' redhat-release-server and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal with a display.
In a basic single installation environment there is no need for an automatic update; any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a Flex Cluster.
Select "Advanced Installation".
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
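A quick way to verify that the SCAN name resolves correctly before continuing is a DNS lookup from any node. This is a hedged example for the SCAN name used later in this paper (oracle34), assuming a single SCAN address as in our setup; the DNS server fields are placeholders:
[grid@oracle52 ~]$ nslookup oracle34
Server:         <your DNS server>
Address:        <your DNS server>#53
Name:    oracle34
Address: 172.16.0.34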
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capacity using these 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using standard ASM as the storage option; we pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol, such as IPMI.
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
  Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed; click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
        -n print node number with the node name
        -p print private interconnect address for the local node
        -i print virtual IP address with the node name
        <node> print information for the specified node
        -l print information for the local node
        -s print node status - active or inactive
        -t print node type - pinned or unpinned
        -g turn on logging
        -v run in debug mode; use at direction of Oracle Support only
        -c print clusterware name
        -a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
oracle53:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle52                 Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crf
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.crsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       oracle52                 OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.evmd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       oracle52                 STABLE
ora.storage
      1        ONLINE  ONLINE       oracle52                 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works (note that long resource names are truncated in its output):
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf"...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which can simplify the creation and the management of ASM disk groups. Now there's a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner; just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready; we can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING
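Disk groups can also be created without the GUI, either with asmcmd or from a SQL*Plus session connected to the ASM instance. Below is a hedged sketch only; the disk group name and the ORCL: disk string follow the ASMLib labels created earlier, and the syntax is the commonly used one rather than the exact procedure of this paper:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> ALTER DISKGROUP DATA MOUNT;   -- repeat the mount on the other node(s)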
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now we just want to install the binaries; the database will be created later with DBCA.
Select the RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is eligible in a cluster of at most 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space; as said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed.
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, which is not documented here; a minimal sketch is shown below).
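For reference only, a silent-mode creation would look roughly like the sketch below. This is an illustration, not the procedure followed in this paper; the template, passwords and options are placeholders to adapt (the database name HP12C and disk group names match the ones used in this setup):
[oracle@oracle52 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName HP12C -sysPassword <password> -systemPassword <password> \
  -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA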
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload load balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation; click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
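On the client side, a connection through the SCAN typically uses a single alias that references the SCAN name rather than the individual node VIPs. Below is a hedged tnsnames.ora sketch for the HP12C database created in this paper; the alias name is arbitrary:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )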
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, and then the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl values for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
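The SCAN listener itself can be checked the same way with srvctl; a short hedged example (the exact wording of the output may vary with the patch level):
[grid@oracle52 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
[grid@oracle52 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node oracle52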
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations – well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are:
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This will check the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name       Disk group
--  -----    -----------------                ---------       ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
         Device/File Name         :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable (or re-enable) the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
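The current autostart setting can be read back at any time with the config option listed in the crsctl help shown earlier; a short hedged example:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.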
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
32
In such a case we would have to update the system as below The file called ldquo99-oraclerulesrdquo is a copy of etcudevrulesd60-rawrules which has been updated with our own data
[rootdbkon01 rulesd] pwd
etcudevrulesd
[rootdbkon01 rulesd] more 99-oraclerules
This file and interface are deprecated
Applications needing raw device access should open regular
block devices with O_DIRECT
Enter raw device bindings here
An example would be
ACTION==add KERNEL==sda RUN+=binraw devrawraw1 N
to bind devrawraw1 to devsda or
ACTION==add ENVMAJOR==8 ENVMINOR==1 RUN+=binraw
devrawraw2 M m
to bind devrawraw2 to the device with major 8 minor 1
Oracle Configuration Registry
KERNEL== mappervoting OWNER=root GROUP=oinstall MODE=640
Voting Disks
KERNEL==mapperdata01 OWNER=oracle GROUP=dba MODE=660
KERNEL==mapperfra01 OWNER=oracle GROUP=dba MODE=660
However as ASMLib is used there is no need to ensure permissions and device path persistency in udev
Install the ASMLib support library
Oracle ASM (Automated Storage Management) is a data volume manager for Oracle databases ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices ASM assists users in disk management by keeping track of storage devices dedicated to Oracle databases and allocating space on those devices according to the requests from Oracle database instances
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux.
Since version 6.4, Red Hat (RH) provides its own library; it is part of the supplementary channel. Prior to 6.4, the RH ASMLib was not supported.
HP published a white paper some time ago describing how to combine device-mapper multipathing with ASMLib. This white paper is available here.
ASMLib consists of the following components:
• An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
• An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
• A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here
For the installation move to the directory where the packages are located and install them
[rootoracle52 ASMLib] yum install kmod-oracleasm-206rh1-2el6x86_64rpm
oracleasmlib-204-1el6x86_64rpm oracleasm-support-218-1el6x86_64rpm
The ASM driver needs to be loaded and the driver filesystem needs to be mounted. This is taken care of by the initialization script /etc/init.d/oracleasm.
Run the /etc/init.d/oracleasm script with the configure option. It will ask for the user and group that default to
owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver
This will configure the on-boot properties of the Oracle ASM library
driver The following questions will determine whether the driver is
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disable/enable option of the oracleasm script controls whether the package starts automatically at boot.
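For instance, automatic startup could be toggled on a node as follows (an illustrative sketch using the same init script; not an additional required step):
[root@oracle52 ~]# /etc/init.d/oracleasm disable
[root@oracle52 ~]# /etc/init.d/oracleasm enable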
The system administrator has one last task: every disk that ASMLib is going to access needs to be marked as an ASM disk and made available. This is accomplished by creating an ASM disk once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the createdisk command on one node, and then run scandisks on every other node.
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands, like deletedisk, querydisk, listdisks, etc.
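As a hedged illustration only, a disk label could be checked or removed with commands along these lines (OLDDISK is a hypothetical label; deletedisk removes the ASM label, so it should never be run against a disk that is in use):
[root@oracle52 ~]# oracleasm querydisk DATA01
[root@oracle52 ~]# oracleasm querydisk -p DATA01
[root@oracle52 ~]# oracleasm deletedisk OLDDISK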
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order giving priority to the multipath devices, and we excluded the single-path devices from the scanning process.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
ORACLEASM_SCANEXCLUDE Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed, instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers that storage and cluster management are system administration tasks, while the database is a DBA task.
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space:
• At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
• 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
• OCR and voting disks: one of each is needed, or more if a redundancy level other than external is used. The size of each file is 1 GB.
• Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
• Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default, or another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation. (A quick check of the available space is shown below.)
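A quick way to confirm the available space before starting is a simple df; for example, with the paths used in this cookbook:
df -h /u01 /tmp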
In this example we created the following directories
Path Usage Size
/u01/app/oracle $ORACLE_BASE for the oracle db owner 5.8 GB
/u01/app/oracle/12c $ORACLE_HOME for the oracle db user –
/u01/app/base $ORACLE_BASE for the grid owner 3.5 GB
/u01/app/grid/12c $ORACLE_HOME for the grid user –
/dev/oracleasm/disks/FRA01 Flash recovery area (ASM) 20 GB
/dev/oracleasm/disks/VOTING OCR (volume) 2 GB
/dev/oracleasm/disks/DATA01 Database (volume) 20 GB
Create the inventory location
[root@oracle52 ~]# mkdir -p /u01/app/oracle/oraInventory
[root@oracle52 ~]# chown -R grid:oinstall /u01/app/oracle/oraInventory
[root@oracle52 ~]# chmod -R 775 /u01/app/oracle/oraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user:
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the accurate privileges on both nodes for the oracle user:
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
Next check that the IO scheduler status has been updated
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent, we can update /etc/grub.conf:
[root@oracle52 sys]# vi /etc/grub.conf
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto elevator=deadline
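As an alternative to the grub kernel parameter, the scheduler setting can also be made persistent with a small udev rule applied to the device-mapper devices. The sketch below is illustrative only (the file name is arbitrary and the rule assumes the ASM LUNs are the dm-* devices shown above); it is not part of the original procedure:
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="dm-*", ATTR{queue/scheduler}="deadline"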
Determining root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (or root) privileges to complete a number of system configuration tasks.
You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
• Use the root password: provide the password to the installer as you are providing other configuration information.
The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
• Use sudo: sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[rootoracle52 sys] visudo
Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD ALL
oracle ALL=(ALL) NOPASSWD ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing... [100%]
1:redhat-release [100%]
This is required because runcluvfy runs the command rpm -q --qf %{version} redhat-release-server and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or an X terminal with a display.
In a basic single installation environment there is no need for automatic updates. Any automatic update policy would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and the SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service; the service will also work if only one is used (see the example entries below).
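For reference, resolving the SCAN to three addresses simply means publishing three A records for the same name in the DNS, returned in round-robin fashion. The snippet below is a hypothetical zone file extract; in this cookbook the SCAN oracle34 actually resolves to the single address 172.16.0.34, and the two additional addresses are invented for illustration:
oracle34    IN A    172.16.0.34
oracle34    IN A    172.16.0.35
oracle34    IN A    172.16.0.36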
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check it from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node termination during installation by selecting a node-termination protocol such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored. It is related to the swap space, as explained earlier.
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output.
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups. Now there is a minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
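For completeness, a disk group can also be created without the GUI, from a SQL*Plus session connected to the ASM instance. The commands below are an illustrative sketch using a hypothetical ASMLib disk labeled DATA02; the disk groups in this paper were created with ASMCA as shown above:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA02 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
Then, on the second node, the new disk group would be mounted with:
[grid@oracle53 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA02 MOUNT;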
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is only eligible on a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on a response file, which is not documented here; a brief illustrative sketch is shown below).
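As a hedged illustration of that silent mode, a RAC database similar to the one built below could be created with a single dbca command along the following lines. The exact flags vary between releases, so check dbca -help before relying on them, and treat the passwords and names as placeholders:
dbca -silent -createDatabase -templateName General_Purpose.dbc \
 -gdbname HP12C -sid HP12C \
 -sysPassword MyPassword1 -systemPassword MyPassword1 \
 -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
 -nodelist oracle52,oracle53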
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (aka SQL*Net) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database instance parameters. Note a consequence of the new SCAN feature: the remote_listener parameter points to the SCAN service instead of a list of node listeners.
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
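These parameters are normally set by the installer and DBCA. Should they ever need to be adjusted by hand, an illustrative command (using the values of this configuration; adjust before use) would be:
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';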
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
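A client would then normally reference the SCAN rather than the individual node listeners. The tnsnames.ora entry below is an illustrative sketch built from the SCAN name (oracle34) and the database service (HP12C) seen above:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )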
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• To verify that we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations – well-defined, uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
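Besides ocrcheck, the clusterware also keeps automatic OCR backups which can be listed with ocrconfig; an illustrative check, run as root, is:
[root@oracle52 ~]# ocrconfig -showbackup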
Additional commands
To disable (or re-enable) the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
33
loaded on boot and what permissions it will have The current values
will be shown in brackets ([]) Hitting ltENTERgt without typing an
answer will keep that current value Ctrl-C will abort
Default user to own the driver interface [] grid
Default group to own the driver interface [] asmadmin
Start Oracle ASM library driver on boot (yn) [n] y
Scan for Oracle ASM disks on boot (yn) [y] y
Writing Oracle ASM library driver configuration done
Initializing the Oracle ASMLib driver [ OK ]
Scanning the system for Oracle ASMLib disks [ OK ]
The disableenable option of the oracleasm script will activate or not the automatic startup of the package
The system administrator has one last task Every disk that ASMLib is going to be accessed needs to be created and made available This is accomplished by creating an ASM disk once for the entire cluster
[rootoracle52 ASMLib] oracleasm createdisk VOTING devmappervoting
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk DATA01 devmapperdata01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm createdisk FRA01 devmapperfra01
Writing disk header done
Instantiating disk done
[rootoracle52 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup the other nodes need to be notified about it Run the createdisk command on one node and then run scandisks on every other node
[rootoracle53 ASMLib] oracleasm scandisks
Reloading disk partitions done
Cleaning any stale ASM disks
Scanning system for ASM disks
[rootoracle53 ASMLib] oracleasm listdisks
DATA01
FRA01
VOTING
Finally check the ownership of the asm devices It should be member of the asmadmin group
[rootoracle52 ASMLib] ls -l devoracleasmdisks
brw-rw---- 1 grid asmadmin 253 5 Jul 25 1526 DATA01
brw-rw---- 1 grid asmadmin 253 4 Jul 25 1526 FRA01
brw-rw---- 1 grid asmadmin 253 6 Jul 25 1526 VOTING
There are some other useful commands like deletedisk querydisk listdisks etc
In order to optimize the scanning effort of Oracle when preparing the ASM disks we can update the oracleasm parameter file as below In this update we defined a scan order with priority for the multipath device and we excluded the single path device of the scanning process
[rootoracle52 ~] vi etcsysconfigoracleasm
ORACLEASM_SCANORDER Matching patterns to order disk scanning
ORACLEASM_SCANORDER=devmapper
ORACLEASM_SCANEXCLUDE Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=sd
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
34
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2 only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases The reason is the ASM directory is now part of the cluster ORACLE-HOME (also called GRID ORACLE_HOME) Oracle considers that storage and cluster management are system administration tasks while the database is a dba task
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
bull At least 35 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) The Oracle base includes Oracle Clusterware and Oracle ASM log files
bull 58 GB of disk space for the Oracle home (the location for the Oracle Database software binaries)
bull OCR and Voting disks Need one of each or more if external redundancy is used The size of each file is 1GB
bull Database space Depends on how big the database will be Oracle recommends at least 2GB
bull Temporary space Oracle requires 1GB space in tmp tmp is used by default or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation
In this example we created the following directories
Path Usage Size
u01apporacle $ORACLE_BASE for the oracle db owner 58GB
u01apporacle12c $ORACLE_HOME for the oracle db user ndash
u01appbase $ORACLE_BASE for the grid owner 35GB
u01appgrid12c $ORACLE_HOME for the grid user ndash
devoracleasmdisksFRA01 Flash recovery area (ASM) 20GB
devoracleasmdisksVOTING OCR (volume) 2GB
devoracleasmdisksDATA01 Database (volume) 20GB
Create the inventory location
[rootoracle52 ~] mkdir -p u01apporacleoraInventory
[rootoracle52 ~] chown -R gridoinstall u01apporacleoraInventory
[rootoracle52 ~] chmod -R 775 u01apporacleoraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user
[rootoracle53 u01] mkdir -p u01appgrid12c
[rootoracle53 u01] chown -R gridoinstall u01appgrid
[rootoracle53 u01] chmod -R 775 u01appgrid
Create the installation directories and set the accurate privileges on both nodes for the oracle user
[rootoracle52 oracle] mkdir u01apporacle12c
[rootoracle52 oracle] chown -R oracleoinstall u01apporacle
[rootoracle52 oracle] chmod -R 775 u01apporacle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
35
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[rootoracle52 sys] scsi_id --whitelist --replace-whitespace --
device=devmapperdata01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[rootoracle52 sys] echo deadline gt sysblockdm-4queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-5queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-6queuescheduler
Next check that the IO scheduler status has been updated
[rootoracle52 sys] cat sysblockdm-6queuescheduler
noop anticipatory [deadline] cfq
In order to make this change persistent we can update etcgrubconf
[rootoracle52 sys] vi etcgrubconf
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto elevator=deadline
Determining root script execution plan During Oracle Grid Infrastructure installation the installer requires you to run scripts with superuser (or root) privileges to
complete a number of system configuration tasks
You can continue to run scripts manually as root or you can delegate to the installer the privilege to run configuration steps as root using one of the following options
bull Use the root password Provide the password to the installer as you are providing other configuration information
The password is used during installation and not stored The root user password must be identical on each cluster member node To enable root command delegation provide the root password to the installer when prompted
bull Use Sudo Sudo is a UNIXreg and Linux utility that allows members of the sudoers list privileges to run individual commands as root To enable Sudo have a system administrator with the appropriate privileges configure a user that is a member
of the sudoers list and provide the username and password when prompted during installation
[root@oracle52 sys]# visudo
# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
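If granting full NOPASSWD rights is not acceptable at all, a more restrictive sudoers entry can be used instead; the lines below are only a sketch (the script paths depend on the final Oracle home locations) covering the scripts the installer asks to run as root:
# Hypothetical restricted entries instead of NOPASSWD: ALL
grid    ALL=(root) NOPASSWD: /u01/app/oracle/oraInventory/orainstRoot.sh, /u01/app/grid/12c/root.sh
oracle  ALL=(root) NOPASSWD: /u01/app/oracle/12c/root.sh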
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:redhat-release         ########################################### [100%]
This is required because runcluvfy runs the rpm command rpm -q --qf '%{version}' redhat-release-server and expects 6Server to be returned. In Red Hat 6 the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform: download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility – which is located in the base directory of the media file – and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location The runInstaller program is located in the root directory of the distribution
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a display.
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Select optional languages if needed
Enter the cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check from this screen that everything is fine. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capability using the 2 interfaces.
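Once the Grid Infrastructure is installed, the role recorded for each interface can be reviewed from the command line with oifcfg (shown here only as a sketch; the output lists one line per interface with its subnet and role):
[grid@oracle52 ~]$ oifcfg getif
# prints: <interface>  <subnet>  global  <public|cluster_interconnect>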
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path earlier created
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) Note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path"
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution
At the time of this writing, bug 10026970 is fixed in 11.2.0.3 which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
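Assuming ASMLib is in use as configured earlier, the disk labels can also be checked individually with the standard oracleasm querydisk sub-command (output format may vary by version):
[root@oracle52 ~]# /usr/sbin/oracleasm querydisk DATA01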
Confirm that we want to ignore the warnings
Summary of the installation settings
Click Yes for running the "sudo root.sh" command.
Click Next
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[grid@oracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
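As an illustration of client configuration against the SCAN (the alias and service name below simply reuse the names from this setup and should be adapted), a tnsnames.ora entry only needs the SCAN name instead of a list of node VIPs:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HP12C))
  )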
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
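The same kind of disk group can also be created without ASMCA, from SQL*Plus connected to the ASM instance; a minimal sketch using an ASMLib label in the style created earlier (external redundancy, as in the GUI example; the disk group and disk names are illustrative):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';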
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is only eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
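Server pools can also be created and inspected from the command line with srvctl; a sketch only (the pool name and sizing below are placeholders, not part of this setup):
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool pool1 -min 1 -max 2
[oracle@oracle52 ~]$ srvctl config srvpool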
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26  00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26  00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note: as a consequence of the new SCAN feature, the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File:
# /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr 58 min 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
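The running state of the SCAN VIP and SCAN listener can be verified the same way (commands only, output omitted):
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener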
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
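The healthcheck component listed above can be run in a similar way; a sketch only, as the available options may vary by version:
[grid@oracle52 ~]$ cluvfy comp healthcheck -collect cluster -bestpractice -html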
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name         Disk group
--  -----    -----------------                  ---------         ----------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637   (ORCL:VOTING01)   [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
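The current autostart setting can be confirmed at any time; the expected output is similar to the message shown above:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.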
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
34
Check that oracleasm will be started automatically after the next boot
[rootoracle52 sysconfig] chkconfig --list oracleasm
oracleasm 0off 1off 2on 3on 4on 5on 6off
Check the available disk space
Starting with RAC 11gR2 only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases The reason is the ASM directory is now part of the cluster ORACLE-HOME (also called GRID ORACLE_HOME) Oracle considers that storage and cluster management are system administration tasks while the database is a dba task
The $ORACLE_BASE of the grid and the oracle users must be different
For the installation we need the following disk space
bull At least 35 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) The Oracle base includes Oracle Clusterware and Oracle ASM log files
bull 58 GB of disk space for the Oracle home (the location for the Oracle Database software binaries)
bull OCR and Voting disks Need one of each or more if external redundancy is used The size of each file is 1GB
bull Database space Depends on how big the database will be Oracle recommends at least 2GB
bull Temporary space Oracle requires 1GB space in tmp tmp is used by default or it may be in another location by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation
In this example we created the following directories
Path Usage Size
u01apporacle $ORACLE_BASE for the oracle db owner 58GB
u01apporacle12c $ORACLE_HOME for the oracle db user ndash
u01appbase $ORACLE_BASE for the grid owner 35GB
u01appgrid12c $ORACLE_HOME for the grid user ndash
devoracleasmdisksFRA01 Flash recovery area (ASM) 20GB
devoracleasmdisksVOTING OCR (volume) 2GB
devoracleasmdisksDATA01 Database (volume) 20GB
Create the inventory location
[rootoracle52 ~] mkdir -p u01apporacleoraInventory
[rootoracle52 ~] chown -R gridoinstall u01apporacleoraInventory
[rootoracle52 ~] chmod -R 775 u01apporacleoraInventory
Create the installation directories and set the accurate privileges on both nodes for the grid user
[rootoracle53 u01] mkdir -p u01appgrid12c
[rootoracle53 u01] chown -R gridoinstall u01appgrid
[rootoracle53 u01] chmod -R 775 u01appgrid
Create the installation directories and set the accurate privileges on both nodes for the oracle user
[rootoracle52 oracle] mkdir u01apporacle12c
[rootoracle52 oracle] chown -R oracleoinstall u01apporacle
[rootoracle52 oracle] chmod -R 775 u01apporacle
Setting the disk IO scheduler on Linux
Disk IO schedulers reorder delay or merge requests for disk IO to achieve better throughput and lower latency Linux has multiple disk IO schedulers available including Deadline Noop Anticipatory and Completely Fair Queuing (CFQ) For best performance with Oracle ASM Oracle recommends that you use the Deadline IO Scheduler
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
35
In order to change the IO scheduler we first need to identify the device-mapper path for each and every ASM disk
[rootoracle52 sys] multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1002 sdc 832 active undef running
`- 2002 sdg 896 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1001 sdb 816 active undef running
`- 2001 sdf 880 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdataVV
size=20G features=1 queue_if_no_path hwhandler=0 wp=rw
`-+- policy=round-robin 0 prio=0 status=active
|- 1003 sdd 848 active undef running
`- 2003 sdh 8112 active undef running
An alternative for identifying the LUN is to use the scsi_id For instance
[rootoracle52 sys] scsi_id --whitelist --replace-whitespace --
device=devmapperdata01
360002ac0000000000000002200006e40
On each cluster node enter the following command to ensure that the Deadline disk IO scheduler is configured for use
[rootoracle52 sys] echo deadline gt sysblockdm-4queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-5queuescheduler
[rootoracle52 sys] echo deadline gt sysblockdm-6queuescheduler
Next check that the IO scheduler status has been updated
[rootoracle52 sys] cat sysblockdm-6queuescheduler
noop anticipatory [deadline] cfq
In order to make this change persistent we can update etcgrubconf
[rootoracle52 sys] vi etcgrubconf
splashimage=(hd00)grubsplashxpmgz
hiddenmenu
title Red Hat Enterprise Linux Server (2632-358141el6x86_64)
root (hd00)
kernel vmlinuz-2632-358141el6x86_64 ro root=UUID=51b7985c-3b07-
4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_USUTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
crashkernel=auto elevator=deadline
Determining root script execution plan During Oracle Grid Infrastructure installation the installer requires you to run scripts with superuser (or root) privileges to
complete a number of system configuration tasks
You can continue to run scripts manually as root or you can delegate to the installer the privilege to run configuration steps as root using one of the following options
bull Use the root password Provide the password to the installer as you are providing other configuration information
The password is used during installation and not stored The root user password must be identical on each cluster member node To enable root command delegation provide the root password to the installer when prompted
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
36
bull Use Sudo Sudo is a UNIXreg and Linux utility that allows members of the sudoers list privileges to run individual commands as root To enable Sudo have a system administrator with the appropriate privileges configure a user that is a member
of the sudoers list and provide the username and password when prompted during installation
[rootoracle52 sys] visudo
Allow root to run any commands anywhere
root ALL=(ALL) ALL
grid ALL=(ALL) NOPASSWD ALL
oracle ALL=(ALL) NOPASSWD ALL
Once this setting is enabled grid and oracle users can act as root by prefixing each and every command with a sudo For instance
[rootoracle52 sys] su - grid
[gridoracle52 ~]$ sudo yum install glibc-utilsx86_64
Loaded plugins product-id refresh-packagekit rhnplugin security
subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite
Setting up Install Process
Obviously enabling sudo for grid and oracle users raises security issues It is recommended to turn sudo off right after the complete binary installation
Oracle Clusterware installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in bash_profile on all your cluster nodes
export ORACLE_BASE=u01appbase
export ORACLE_HOME=u01appgrid12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Check the environment before installation
In order for runcluvfysh to run correctly with Red Hat 6 redhat-release-6Server-1noarchrpm needs to be installed This is a dummy rpm which has to be installed as root user as follows
[rootoracle53 kits] rpm -ivh redhat-release-6Server-1noarchrpm
Preparing [100]
1redhat-release [100]
This is required because runcluvfy runs the following rpm command rpm -q --qf version redhat-release-server and expects 6Server to be returned In Red Hat 6 redhat-release-server rpm does not exist
Download the rpm from ldquoMy oracle Support Doc ID 15140121rdquo Donrsquot be confused by the platform download the clupackzip file which is attached to the document and install the package
Then run the cluster verify utility ndash which is located in the base directory of the media file ndash and check for some missing setup
runcluvfysh stage -pre crsinst -n oracle52oracle53 ndash
verbosegtgttmpcluvfylog
In our case an error related to the swap space was reported We can ignore it
RunInstaller
Start the runInstaller from your distribution location The runInstaller program is located in the root directory of the distribution
In order to run the installer graphical interface itrsquos necessary to setup a vncserver session or a terminal X and a Display
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
37
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
38
Select optional languages if needed
Enter cluster name and SCAN name Remember the SCAN name needs to be resolved by the DNS For high availability purposes Oracle recommends using 3 IP addresses for the SCAN service The service will also work if only one is used
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
17216052
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
39
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
40
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME Once again both directories should be parallel $ORACLE_HOME canrsquot be a subdirectory of $ORACLE_BASE
Set the Inventory location with the path earlier created
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
41
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    oracle52
ora.FRA.dg     ora....up.type ONLINE    ONLINE    oracle52
ora....ER.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora....N1.lsnr ora....er.type ONLINE    ONLINE    oracle52
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    oracle52
ora.asm        ora.asm.type   ONLINE    ONLINE    oracle52
ora.cvu        ora.cvu.type   ONLINE    ONLINE    oracle52
ora.mgmtdb     ora....db.type ONLINE    ONLINE    oracle52
ora....network ora....rk.type ONLINE    ONLINE    oracle52
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    oracle52
ora.ons        ora.ons.type   ONLINE    ONLINE    oracle52
ora....SM1.asm application    ONLINE    ONLINE    oracle52
ora....52.lsnr application    ONLINE    ONLINE    oracle52
ora....e52.ons application    ONLINE    ONLINE    oracle52
ora....e52.vip ora....t1.type ONLINE    ONLINE    oracle52
ora....SM2.asm application    ONLINE    ONLINE    oracle53
ora....53.lsnr application    ONLINE    ONLINE    oracle53
ora....e53.ons application    ONLINE    ONLINE    oracle53
ora....e53.vip ora....t1.type ONLINE    ONLINE    oracle53
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    oracle52
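Since crs_stat is deprecated, the same overview is available through the supported syntax. A short sketch (run as the grid user; resource names as shown above):
[grid@oracle52 ~]$ crsctl stat res -t                 # tabular view of all cluster resources
[grid@oracle52 ~]$ crsctl stat res ora.DATA.dg        # detailed status of a single resource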
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
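Name resolution of the SCAN can also be verified directly from the operating system; a minimal sketch, assuming the SCAN name oracle34 is registered in the DNS as described earlier:
[grid@oracle52 ~]$ nslookup oracle34                  # should return the SCAN VIP address(es)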
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance: ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA/
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA/
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING/
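For administrators who prefer the command line over ASMCA, a disk group can also be created from SQL*Plus connected to the ASM instance. This is a minimal sketch only; the disk label ORCL:DATA02 is a hypothetical spare ASMLib disk, not one used in this paper:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
SQL> -- a disk group created this way is mounted only on the local node;
SQL> -- mount it on the second node with: ALTER DISKGROUP DATA2 MOUNT;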
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
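A quick way to confirm that the environment is consistent on every node before launching the installer (a simple sketch; node names as used in this paper):
[oracle@oracle52 ~]$ grep ORACLE_ ~/.bash_profile
[oracle@oracle52 ~]$ ssh oracle53 grep ORACLE_ .bash_profile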
Installation
Log in as the oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The member nodes of the RAC cluster are selected in this screen. The SSH setup or verification can also be done from this screen.
Select languages in this screen.
The Standard Edition is only eligible on a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here; see the sketch at the end of this section).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
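As mentioned at the beginning of this section, DBCA can also run in silent mode, which is convenient for scripted deployments. The sketch below is indicative only: flag names should be confirmed with dbca -help, and the names and passwords are placeholders:
[oracle@oracle52 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName HP12C -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword MySys123 -systemPassword MySystem123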
Post-installation steps
The service (aka SQL*Net) allows connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?        00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?        00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?        00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
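The same check can also be made through the clusterware itself with srvctl (run as the grid user):
[grid@oracle52 ~]$ srvctl status listener             # node listeners on all nodes
[grid@oracle52 ~]$ srvctl status scan_listener        # SCAN listener(s)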
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis

NAME                TYPE        VALUE
------------------- ----------- ------------------------------
local_listener      string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))

SQL> show parameter remote_listener

NAME                TYPE        VALUE
------------------- ----------- ------------------------------
remote_listener     string      oracle34:1521
In node 2:
SQL> show parameter local_lis

NAME                TYPE        VALUE
------------------- ----------- ------------------------------
local_listener      string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))

SQL> show parameter remote_listener

NAME                TYPE        VALUE
------------------- ----------- ------------------------------
remote_listener     string      oracle34:1521
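With the local and remote listeners in place, a quick connection test can be made through the SCAN using the easy connect syntax; a sketch, using the HP12C service and SCAN name oracle34 shown in this paper (the password is a placeholder):
[oracle@oracle52 ~]$ sqlplus system/MySystem123@//oracle34:1521/HP12C
SQL> SELECT instance_name, host_name FROM v$instance;
The query shows which of the two instances the SCAN listener handed the connection to.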
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))   # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
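The SCAN listener configuration and status can be queried the same way (a short sketch):
[grid@oracle52 ~]$ srvctl config scan_listener        # port and number of SCAN listeners
[grid@oracle52 ~]$ srvctl status scan                 # SCAN VIP status per node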
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01)  [VOTING]
Located 1 voting disk(s).
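cluvfy can also validate individual components from the list above; for example (a sketch, node names as used in this paper):
[grid@oracle52 ~]$ cluvfy comp ocr -n oracle52,oracle53 -verbose
[grid@oracle52 ~]$ cluvfy comp clocksync -n oracle52,oracle53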
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands:
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
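The current autostart setting can be displayed at any time with the config option listed in the crsctl usage output earlier (a quick sketch):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs      # prints whether autostart is enabled or disabled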
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
36
• Use Sudo: Sudo is a UNIX® and Linux utility that allows members of the sudoers list to run individual commands as root. To enable Sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each and every command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the complete binary installation.
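For example, once the binaries are installed, the two temporary entries can be commented out (or removed) again with visudo; a minimal sketch:
[root@oracle52 ~]# visudo
## entries added for the Oracle installation, now disabled
# grid    ALL=(ALL)       NOPASSWD: ALL
# oracle  ALL=(ALL)       NOPASSWD: ALL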
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a dummy rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...                [100%]
   1:redhat-release         [100%]
This is required because runcluvfy runs the rpm command "rpm -q --qf %{version} redhat-release-server" and expects 6Server to be returned. In Red Hat 6, the redhat-release-server rpm does not exist.
Download the rpm from "My Oracle Support Doc ID 1514012.1". Don't be confused by the platform; download the clupack.zip file which is attached to the document and install the package.
Then run the cluster verify utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
RunInstaller
Start the runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution.
In order to run the installer graphical interface, it's necessary to set up a VNC server session or a terminal X and a DISPLAY.
In a basic single installation environment there is no need for an automatic update. Any automatic update would be a customer strategy.
Select "Install and Configure Oracle Grid Infrastructure for a Cluster".
In this example the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Select optional languages if needed.
Enter the cluster name and SCAN name. Remember that the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
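Before moving on, it is worth confirming from the cluster nodes that the SCAN name resolves as expected; a quick sanity check (output trimmed; oracle34 is the SCAN name used in this paper and 172.16.0.34 the single SCAN IP configured here):
[grid@oracle52 ~]$ nslookup oracle34
Name:    oracle34
Address: 172.16.0.34
If three addresses are registered in the DNS, repeated lookups should rotate through all three.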
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier; it is also possible to double-check that everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic; Oracle will enable HA capacity using the 2 interfaces.
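Once the Grid Infrastructure is up, the interface classification chosen here can be verified from the command line; a small sketch (no output shown, as it depends on the interface names and subnets of the installation):
[grid@oracle52 ~]$ oifcfg getif
Each interface is listed with its subnet and its role, either public or cluster_interconnect.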
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
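If IPMI is adopted later, the BMC address and credentials can be registered with the clusterware from the command line; a hedged sketch (the address below is purely hypothetical):
[grid@oracle52 ~]$ crsctl set css ipmiaddr 192.168.10.45
[grid@oracle52 ~]$ crsctl set css ipmiadmin ipmiuser
The second command prompts for the IPMI password; the address has to be set on each node, as every BMC has its own address.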
Define the groups for the ASM instance owner in accordance with the groups initially created.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored; it is related to the swap space, as explained earlier.
Regarding the second warning:
- PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify ASMLib status:
# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings.
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes to run the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ olsnodes
oracle52
oracle53
[grid@oracle52 ~]$ olsnodes -i -n
oracle52        1       oracle52vip
oracle53        2       oracle53vip
Check the status of the cluster layer:
[grid@oracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows, in short, the status of the CRS processes of the cluster:
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes:
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
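Because client configuration only needs the SCAN, a client-side tnsnames.ora entry stays the same regardless of how many nodes run the database; a minimal sketch for the HP12C database created later in this paper (the service name is assumed to match the database name):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HP12C))
  )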
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
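Disk groups can also be created without the GUI, from SQL*Plus on the ASM instance; a minimal sketch assuming an ASMLib disk labeled DATA01, as used elsewhere in this paper:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
This is essentially what ASMCA performs behind the scenes in the steps below.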
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox would only be used if we added a voting disk to the cluster layer. Note also that we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle (oinstall) user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select Languages in this screen
The Standard Edition is eligible on clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Create a RAC database
Connect as the "oracle" user, then start DBCA from one node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
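For reference, a dbca silent-mode invocation might look roughly like the following; this is only a sketch (passwords and sizing options omitted, and the template and option names should be checked against the dbca -help output of the release):
[oracle@oracle52 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname HP12C -nodelist oracle52,oracle53 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA
In this paper, however, the database is created with the DBCA graphical interface as described below.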
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Databases" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. Server pools allow you to create server profiles and to run RAC databases in them. This helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
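Server pools can also be managed afterwards with srvctl; a small sketch (the pool name and limits below are examples only):
[grid@oracle52 ~]$ srvctl add srvpool -serverpool pool1 -min 1 -max 2 -importance 10
[grid@oracle52 ~]$ srvctl config srvpool -serverpool pool1
A policy-managed database created in such a pool runs on whichever servers the pool currently owns, rather than on a fixed list of nodes.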
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schemas and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install.
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in progress.
Database creation completed.
Post-installation steps
The listener service (SQL*Net) allows client connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26 ?        00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26 ?        00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26 ?        00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
local_listener    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME              TYPE        VALUE
----------------- ----------- ------------------------------
remote_listener   string      oracle34:1521
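These parameters are set automatically during database creation. Should they ever need to be restored, a sketch of the corresponding commands (using the SCAN name, port, and VIP addresses of this paper) would be:
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))' SCOPE=BOTH SID='HP12C_2';
The local_listener value is node-specific, so it has to be set per instance (SID) with the VIP address of the corresponding node.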
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find the Cluster Verification Utility (CVU) validation tool, called cluvfy.
CVU goals:
• Verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This checks the hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
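The current autostart setting can be displayed at any time with the config verb listed in the crsctl usage shown earlier; a short sketch:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.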
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here; so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
37
In a basic single installation environment there is no need for an automatic update Any automatic update would be a customer strategy
Select ldquoInstall and Configure Oracle Grid Infrastructure for a Clusterrdquo
In this example the goal is to install a standard cluster not a flex cluster
Select Advanced Installation
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
38
Select optional languages if needed
Enter cluster name and SCAN name Remember the SCAN name needs to be resolved by the DNS For high availability purposes Oracle recommends using 3 IP addresses for the SCAN service The service will also work if only one is used
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
17216052
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
39
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
40
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME Once again both directories should be parallel $ORACLE_HOME canrsquot be a subdirectory of $ORACLE_BASE
Set the Inventory location with the path earlier created
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
41
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE oracle52 Started,STABLE
ora.cluster_interconnect.haip
1 ONLINE ONLINE oracle52 STABLE
ora.crf
1 ONLINE ONLINE oracle52 STABLE
ora.crsd
1 ONLINE ONLINE oracle52 STABLE
ora.cssd
1 ONLINE ONLINE oracle52 STABLE
ora.cssdmonitor
1 ONLINE ONLINE oracle52 STABLE
ora.ctssd
1 ONLINE ONLINE oracle52 OBSERVER,STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.drivers.acfs
1 ONLINE ONLINE oracle52 STABLE
ora.evmd
1 ONLINE ONLINE oracle52 STABLE
ora.gipcd
1 ONLINE ONLINE oracle52 STABLE
ora.gpnpd
1 ONLINE ONLINE oracle52 STABLE
ora.mdnsd
1 ONLINE ONLINE oracle52 STABLE
ora.storage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
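As a convenience (not shown in the original output), the same information can be displayed in the shorter tabular form mentioned above, or restricted to a single resource taken from the listing:
[grid@oracle52 ~]$ crsctl stat res -t
[grid@oracle52 ~]$ crsctl stat res ora.asm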
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
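If the SCAN check fails, the DNS entry itself can be queried directly. The command below is illustrative; oracle34 is the SCAN name used in this paper, and with the recommended 3 addresses the lookup should return all of them in round-robin order (a single address was registered in this setup):
[grid@oracle52 ~]$ nslookup oracle34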
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance: ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with ASMCA.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
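For administrators who prefer the command line to ASMCA, a disk group can also be created directly from SQL*Plus on the ASM instance. The sketch below is only an illustration: the DATA2 disk group name and the ORCL:DATA02 ASMLib label are hypothetical and were not created in this installation.
[grid@oracle52 ~]$ export ORACLE_SID=+ASM1
[grid@oracle52 ~]$ sqlplus / as sysasm <<EOF
-- external redundancy, matching the choice made in ASMCA above
CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';
EOF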
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible on clusters with a maximum of 4 CPUs (sockets).
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
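Server pools can also be created and inspected outside DBCA with srvctl. The commands below are a sketch only; the pool name mypool and the minimum/maximum values are hypothetical and are not part of this configuration:
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool mypool -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool -serverpool mypool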
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script. We can also configure the EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has changed slightly, as Oracle introduced the SCAN service (seen earlier).
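To illustrate what this means on the client side (this snippet is not part of the original paper), a client alias only needs to reference the SCAN name oracle34 instead of every node VIP. The entry below assumes the HP12C service that appears in the listener output later in this section:
cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<EOF
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HP12C))
  )
EOF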
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1:
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
In node 2:
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
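Should these parameters ever need to be corrected, for example after changing the SCAN name, they can be reset from any instance. This is a hedged illustration and not a step required by this installation:
sqlplus / as sysdba <<EOF
ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';
ALTER SYSTEM REGISTER;
EOF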
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener:
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener:
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service:
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
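The SCAN listener can be checked the same way with standard srvctl subcommands, shown here for completeness (output omitted):
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl status scan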
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location:
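Other stages are available as well; for instance, the pre- and post-Grid-Infrastructure checks below are commonly used (shown for illustration, output not reproduced here):
[grid@oracle52 ~]$ cluvfy stage -pre crsinst -n oracle52,oracle53 -verbose
[grid@oracle52 ~]$ cluvfy stage -post crsinst -n oracle52,oracle53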
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s).
OCR does have its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1492
Available space (kbytes) : 408076
ID : 573555284
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
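OCR backups can also be listed and triggered manually with ocrconfig. These are standard commands, run as root, added here only as a reference:
[root@oracle52 ~]# ocrconfig -showbackup
[root@oracle52 ~]# ocrconfig -manualbackup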
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
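In the same spirit, the clusterware stack can be stopped and restarted with crsctl. These standard root commands are added here for convenience and are not part of the original text:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs          # stop the stack on the local node
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs         # start it again on the local node
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop cluster -all # stop the CRS resources on all nodes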
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
38
Select optional languages if needed
Enter cluster name and SCAN name Remember the SCAN name needs to be resolved by the DNS For high availability purposes Oracle recommends using 3 IP addresses for the SCAN service The service will also work if only one is used
Configure the public and VIP names of all nodes in the cluster The SSH setting was done earlier It is also possible to double-check if everything is fine from this screen A failure here will prevent the installation from being successful Then click Next
Define the role for the Ethernet port As mentioned earlier we dedicated 2 interfaces for the private interconnect traffic Oracle will enable HA capacity using the 2 interfaces
17216052
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
39
Click Yes to create a database repository for the Grid Infrastructure Management Repository
Oracle recommends using Standard ASM as the storage option We pre-configured the system for the ASM implementation
In this screen it is time to create a first ASM diskgroup This diskgroup will be used to store the cluster voting disk as well as the OCR repository
Define the password for the ASM instance
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
40
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME Once again both directories should be parallel $ORACLE_HOME canrsquot be a subdirectory of $ORACLE_BASE
Set the Inventory location with the path earlier created
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
41
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen it is time to create a first ASM disk group. This disk group will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Platform Management Interface) during the installation. IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol such as IPMI.
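If IPMI is enabled later, it can be declared to Clusterware from the command line. A minimal sketch, assuming a BMC user and address (both illustrative, not part of this setup); refer to the Clusterware documentation for the exact procedure:
crsctl set css ipmiadmin bmcuser
crsctl set css ipmiaddr 172.16.1.52
crsctl get css ipmiaddr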
Define the group for the ASM instance owner in accordance with the groups initially created.
Check the path for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel: $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the Inventory location with the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored. It is related to the swap space as explained earlier.
Regarding the second warning:
- PRVF-5150 : Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) Note "Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path".
MOS DOC: Device Checks for ASM Fails with PRVF-5150: Path ORCL:* is not a valid path [ID 1210863.1]
Solution:
At the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual Verification
To verify ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
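As an additional cross-check, not shown in the original run, the ASMLib disk labels visible on a node can be listed; the labels created earlier (for example DATA01) should appear identically on both nodes:
[root@oracle52 ~]# /etc/init.d/oracleasm listdisks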
Confirm that we want to ignore the warnings
Summary of the installation settings.
Click Yes for running the "sudo root.sh" command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs.
Check the installation
Processes
Check that the processes are running on both nodes:
ps -ef | grep ora
ps -ef | grep d.bin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[gridoracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
<node> print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip
Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by crsctl status resource; crs_stat remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE oracle52 Started,STABLE
ora.cluster_interconnect.haip
1 ONLINE ONLINE oracle52 STABLE
ora.crf
1 ONLINE ONLINE oracle52 STABLE
ora.crsd
1 ONLINE ONLINE oracle52 STABLE
ora.cssd
1 ONLINE ONLINE oracle52 STABLE
ora.cssdmonitor
1 ONLINE ONLINE oracle52 STABLE
ora.ctssd
1 ONLINE ONLINE oracle52 OBSERVER,STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.drivers.acfs
1 ONLINE ONLINE oracle52 STABLE
ora.evmd
1 ONLINE ONLINE oracle52 STABLE
ora.gipcd
1 ONLINE ONLINE oracle52 STABLE
ora.gpnpd
1 ONLINE ONLINE oracle52 STABLE
ora.mdnsd
1 ONLINE ONLINE oracle52 STABLE
ora.storage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" extension for shorter output:
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
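For a condensed, tabular view of the same resources, the "-t" variant mentioned above can be used (output omitted here):
[grid@oracle52 ~]$ crsctl stat res -t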
Although deprecated since 11gR2, crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use SCAN.
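For illustration, once the database is created, a client can reach either instance through the SCAN alone. A minimal sketch, assuming the default database service HP12C and the SCAN name oracle34 used in this setup (the tnsnames.ora entry and credentials are illustrative):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = HP12C))
  )
or, with EZConnect: sqlplus system@//oracle34:1521/HP12C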
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file /etc/nsswitch.conf
All nodes have same hosts entry defined in file /etc/nsswitch.conf
Check for integrity of name service switch configuration file /etc/nsswitch.conf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which can simplify the creation and the management of ASM disk groups. Now there's minimal learning curve associated with configuring and maintaining an ASM instance. ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with ASMCA.
Existing disk groups are already listed.
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from a command line interface:
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
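For reference, a disk group can also be created without ASMCA, directly from SQL*Plus connected to the ASM instance. A minimal sketch; the disk group name DATA2 and the ASMLib label DATA02 are illustrative and not part of the configuration used in this paper:
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DATA02';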
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Login as oracle:oinstall user and start the runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of dbhome have not changed. No need to overwrite.
The contents of oraenv have not changed. No need to overwrite.
The contents of coraenv have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize the workload load balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
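Server pools can also be inspected and managed from the command line with srvctl once the cluster is up. A hedged sketch; the pool name and sizing below are illustrative, not the values used in this paper:
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool pool_prod -min 1 -max 2
[oracle@oracle52 ~]$ srvctl config srvpool -serverpool pool_prod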
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script. We can also configure the EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances. Since 11gR2 the way it works has changed slightly, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
In node 2
SQL> show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster. Thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 26-JUL-2013 14:04:22
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/12c/network/admin/listener.ora
Listener Log File /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 26-JUL-2013 14:03:54
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/12c/network/admin/listener.ora
Listener Log File /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
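The SCAN listener itself can be checked the same way; a short sketch using the corresponding srvctl options:
[grid@oracle52 ~]$ srvctl config scan_listener
[grid@oracle52 ~]$ srvctl status scan_listener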
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup.
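Individual components from the list above can be checked the same way; for instance, using the node names of this setup:
[grid@oracle52 ~]$ cluvfy comp clocksync -n oracle52,oracle53
[grid@oracle52 ~]$ cluvfy comp nodecon -n oracle52,oracle53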
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools: ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1492
Available space (kbytes) : 408076
ID : 573555284
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
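Another OCR-related tool worth knowing, not shown in the original run, is ocrconfig, which among other things reports the automatic OCR backups taken by the cluster:
[root@oracle52 ~]# ocrconfig -showbackup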
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
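The current autostart setting can be displayed with the config option listed in the crsctl usage shown earlier; it reports whether Oracle High Availability Services autostart is enabled or disabled:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl config crs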
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
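Note that the profile above references $ORACLE_HOME without defining it. On this setup the grid user's Oracle home is the Grid Infrastructure home used throughout this paper, so lines along these lines would typically be added near the top of the file (shown here as an assumption, not part of the captured profile):
export ORACLE_HOME=/u01/app/grid/12c
export GRID_HOME=$ORACLE_HOME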
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
40
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system With Oracle 12c Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity You can configure node-termination during installation by selecting a node-termination protocol such as IPMI
Define the group for the ASM instance owner accordingly with the groups initially created
Check the path for $ORACLE_BASE and $ORACLE_HOME Once again both directories should be parallel $ORACLE_HOME canrsquot be a subdirectory of $ORACLE_BASE
Set the Inventory location with the path earlier created
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
41
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE
Installation
Log in as the oracle:oinstall user and start runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries. The database will be created later with DBCA
Select RAC installation
The nodes that are members of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPUs (sockets)
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here)
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in it. It helps optimize the workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful
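As a sketch only (the pool name and limits below are hypothetical), a server pool can also be created and inspected from the command line with srvctl:
[oracle@oracle52 ~]$ srvctl add srvpool -serverpool mypool -min 1 -max 2 -importance 10
[oracle@oracle52 ~]$ srvctl config srvpool
[oracle@oracle52 ~]$ srvctl status srvpool -detail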
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it
Last check before the installation. Click Finish
Database creation in Progress
Database creation completed
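Once DBCA has finished, the new database resource can be checked from any node with srvctl, for example (HP12C is the database name used in this document):
[oracle@oracle52 ~]$ srvctl status database -d HP12C
[oracle@oracle52 ~]$ srvctl config database -d HP12C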
Post-installation steps
The service (aka SQL*Net) allows the connection to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
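Should these parameters ever need to be adjusted, for instance after a SCAN change, they can be reset with ALTER SYSTEM; a sketch, assuming the SCAN name oracle34:
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';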
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
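The running state of the SCAN VIP and of the SCAN listener can also be checked with srvctl:
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener
[grid@oracle52 ~]$ srvctl config scan_listener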
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
• To verify if we have a well formed cluster for RAC installation configuration and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
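Another example, shown here only as a sketch, is the pre-installation check that would typically be run before installing the Grid Infrastructure:
cluvfy stage -pre crsinst -n oracle52,oracle53 -verbose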
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
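If the voting files ever have to be moved to another disk group, crsctl can relocate them in a single command; an illustrative sketch, assuming the target disk group +DATA is already mounted on all nodes:
[root@oracle52 ~]# crsctl replace votedisk +DATA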
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
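ocrconfig, run as root, also manages the OCR backups; for example:
[root@oracle52 ~]# ocrconfig -showbackup
[root@oracle52 ~]# ocrconfig -manualbackup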
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
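In the same way the whole stack can be stopped and restarted on a node, which is useful for maintenance windows:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stop crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl start crs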
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda
#version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first this is
# not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or just converging the current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
41
Define the sudo credentials by providing the grid user password
The first warning can be ignored It is related to the swap space as explained earlier
Regarding the second warning
- PRVF-5150 Path ORCLDISK1 is not a valid path on all nodes
Operation Failed on Nodes []
Refer to the My Oracle Support (MOS) Note ldquoDevice Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid pathrdquo
MOS DOC Device Checks for ASM Fails with PRVF-5150 Path ORCL is not a valid path [ID 12108631]
Solution
At the time of this writing bug 10026970 is fixed in 11203 which is not released yet If the ASM device passes manual verification the warning can be ignored
Manual Verification
To verify ASMLib status
$etcinitdoracleasm status
Checking if ASM is loaded yes
Checking if devoracleasm is mounted yes
[gridoracle52 ~] dd if=devoracleasmdisksDATA01 of=devnull bs=1024k
count=1
1+0 records in
1+0 records out
1048576 bytes (10 MB) copied 000401004 s 261 MBs
Confirm that we want to ignore the warnings
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
42
Summary of the installation settings
Click Yes for running the ldquosudo rootshrdquo command
Click Next
Installation completed Click Close The installation log is located in u01apporacleoraInventorylogs
Check the installation
Processes
Check that the processes are running on both nodes
ps ndashef|grep ora
ps ndashef|grep dbin
Nodes information
olsnodes provides information about the nodes in the CRS cluster and their interfaces This is roughly similar to the previous releases
[gridoracle52 ~]$ olsnodes -h
Usage olsnodes [ [ [-n] [-i] [-s] [-t] [ltnodegt | -l [-p]] ] | [-c] | [-a] ] [-
g] [-v]
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by "crsctl status resource"; the crs_stat command remains for backward compatibility only.
crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
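For a quick overview similar to the old crs_stat -t listing, the new syntax can be run in tabular mode, for example:
[grid@oracle52 ~]$ crsctl status resource -t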
[grid@oracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
The command below shows in short the status of the CRS processes of the cluster
[root@oracle52 ~]# crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root@oracle52 ohasd]# crsctl stat res -t -init
[grid@oracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE oracle52 Started,STABLE
ora.cluster_interconnect.haip
1 ONLINE ONLINE oracle52 STABLE
ora.crf
1 ONLINE ONLINE oracle52 STABLE
ora.crsd
1 ONLINE ONLINE oracle52 STABLE
ora.cssd
1 ONLINE ONLINE oracle52 STABLE
ora.cssdmonitor
1 ONLINE ONLINE oracle52 STABLE
ora.ctssd
1 ONLINE ONLINE oracle52 OBSERVER,STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.drivers.acfs
1 ONLINE ONLINE oracle52 STABLE
ora.evmd
1 ONLINE ONLINE oracle52 STABLE
ora.gipcd
1 ONLINE ONLINE oracle52 STABLE
ora.gpnpd
1 ONLINE ONLINE oracle52 STABLE
ora.mdnsd
1 ONLINE ONLINE oracle52 STABLE
ora.storage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with the "-t" option for a shorter output.
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Although deprecated since 11gR2, crs_stat still works:
[grid@oracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
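As an illustration of how a client would then connect through the SCAN, a minimal tnsnames.ora entry on the client side could look like the sketch below (assuming the SCAN name oracle34 used in this setup and the HP12C database service created later in this document):
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )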
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called "ASMCA" which simplifies the creation and management of ASM disk groups. There is now a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with asmca.
Existing disk groups are already listed
Click "Create" to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note: the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used "External" redundancy, as we do not need any extra failure group.
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes. Click "Mount All" to mount them all.
Click "Yes" to confirm.
The disk groups are ready. We can now quit "ASMCA".
We can also list the disk groups from a command line interface
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
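For scripted deployments, disk groups can also be created without the GUI by connecting to the ASM instance with SQL*Plus. The sketch below is only an illustration: the ASMLib disk labels DATA01 and FRA01 are assumptions and must be replaced by the candidate disk names visible in your own environment (for example as listed by oracleasm listdisks):
[grid@oracle52 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA01';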
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
The Standard Edition is eligible in a cluster with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from a node. A terminal X access is needed here again (unless using the silent mode based on an answer file, not documented here).
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now, we will create a new database.
In this stage we can either create a new database using a template or customize the new database
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see what parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
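If a pool were to be created manually rather than through DBCA, a minimal command-line sketch could look like the following (the pool name and limits are illustrative only):
[grid@oracle52 ~]$ srvctl add srvpool -serverpool pool1 -min 1 -max 2 -importance 10
[grid@oracle52 ~]$ srvctl config srvpool -serverpool pool1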
Here we define whether we want to configure Enterprise Manager and to run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature for 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466 1 0 Jul26 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid 22050 1 0 Jul26 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis
NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=
                                              172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
remote_listener                   string      oracle34:1521
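Should the remote_listener ever need to be set manually (for instance after changing the SCAN), it can be adjusted with ALTER SYSTEM; a minimal sketch using the SCAN name and port shown above:
SQL> ALTER SYSTEM SET remote_listener='oracle34:1521' SCOPE=BOTH SID='*';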
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))    # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET    # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify if we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is:
Device/File Name : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows:
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1492
Available space (kbytes) : 408076
ID : 573555284
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
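The automatic OCR backups maintained by the clusterware can also be listed at any time with a read-only command:
[root@oracle52 ~]# ocrconfig -showbackup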
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
43
where
-n print node number with the node name
-p print private interconnect address for the local node
-i print virtual IP address with the node name
ltnodegt print information for the specified node
-l print information for the local node
-s print node status - active or inactive
-t print node type - pinned or unpinned
-g turn on logging
-v Run in debug mode use at direction of Oracle Support only
-c print clusterware name
-a print active node roles of the nodes in the cluster
[gridoracle52 ~]$ olsnodes
oracle52
oracle53
[gridoracle52 ~]$ olsnodes -i -n
oracle52 1 oracle52vip
oracle53 2 oracle53vip Check the status of the cluster layer
[gridoracle52 ~]$ crsctl check crs
CRS-4638 Oracle High Availability Services is online
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
crs_stat and crsctl will deliver useful information about the status of the cluster Nevertheless the crs_stat command is deprecated and has been replaced by crsctl status resource The crs_stat command remains for backward compatibility only
crsctl does much more than crs_stat as it will manage the entire cluster resources
[gridoracle52 ~]$ crsctl -h
Usage crsctl add - add a resource type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl eval - evaluate operations on resource or other entity
without performing them
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease or an action entrypoint
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value restoring its default
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
44
The command below shows in short the status of the CRS processes of the cluster
[rootoracle52 ~] crsctl check cluster -all
oracle52
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
oracle53
CRS-4537 Cluster Ready Services is online
CRS-4529 Cluster Synchronization Services is online
CRS-4533 Event Manager is online
The command below shows the status of the CRS processes
[root oracle52 ohasd] crsctl stat res -t -init
[gridoracle52 ~]$ crsctl stat res -t -init
-----------------------------------------------------------------------------
Name Target State Server State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
oraasm
1 ONLINE ONLINE oracle52 StartedSTABLE
oracluster_interconnecthaip
1 ONLINE ONLINE oracle52 STABLE
oracrf
1 ONLINE ONLINE oracle52 STABLE
oracrsd
1 ONLINE ONLINE oracle52 STABLE
oracssd
1 ONLINE ONLINE oracle52 STABLE
oracssdmonitor
1 ONLINE ONLINE oracle52 STABLE
oractssd
1 ONLINE ONLINE oracle52 OBSERVERSTABLE
oradiskmon
1 OFFLINE OFFLINE STABLE
oradriversacfs
1 ONLINE ONLINE oracle52 STABLE
oraevmd
1 ONLINE ONLINE oracle52 STABLE
oragipcd
1 ONLINE ONLINE oracle52 STABLE
oragpnpd
1 ONLINE ONLINE oracle52 STABLE
oramdnsd
1 ONLINE ONLINE oracle52 STABLE
orastorage
1 ONLINE ONLINE oracle52 STABLE
The command below can be used with ldquo-trdquo extension for shorter output
[gridoracle52 ~]$ crsctl stat res
NAME=oraDATAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraFRAdg
TYPE=oradiskgrouptype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
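This profile is for node 1. As an assumption worth stating explicitly (following the +ASM1/+ASM2 instance naming visible in the cluster resources earlier in this paper), the only change in the grid profile on the second node would be the instance name:
# on oracle53 (node 2)
export ORACLE_SID=+ASM2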
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
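ORACLE_SID is deliberately left empty in this template. As an illustrative completion (instance names taken from the HP12C database created earlier in this paper), each node would export its local instance name, for example:
# on oracle52 (node 1)
export ORACLE_SID=HP12C_1
# on oracle53 (node 2)
export ORACLE_SID=HP12C_2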
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook, and will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
45
NAME=oraLISTENERlsnr
TYPE=oralistenertype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraLISTENER_SCAN1lsnr
TYPE=orascan_listenertype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraMGMTLSNR
TYPE=oramgmtlsnrtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraasm
TYPE=oraasmtype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oracvu
TYPE=oracvutype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oramgmtdb
TYPE=oramgmtdbtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oranet1network
TYPE=oranetworktype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoc4j
TYPE=oraoc4jtype
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraons
TYPE=oraonstype
TARGET=ONLINE ONLINE
STATE=ONLINE on oracle52 ONLINE on oracle53
NAME=oraoracle52vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle52
NAME=oraoracle53vip
TYPE=oracluster_vip_net1type
TARGET=ONLINE
STATE=ONLINE on oracle53
NAME=orascan1vip
TYPE=orascan_viptype
TARGET=ONLINE
STATE=ONLINE on oracle52
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
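To reuse this kickstart file for additional cluster nodes, it can be syntax-checked with ksvalidator (from the pykickstart package) and passed to the installer at boot time. The file name and URL below are assumptions for illustration only:
# yum install -y pykickstart
# ksvalidator /root/anaconda-ks.cfg
boot: linux ks=http://192.168.1.10/ks/anaconda-ks.cfg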
Grid user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
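Note that the profile above references $ORACLE_HOME without setting it, so the Grid Infrastructure home must be exported before those PATH lines are evaluated. A sketch of the exports assumed for the grid user (ORACLE_BASE is inferred from the diagnostic paths shown earlier in this paper):
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
export GRID_HOME=$ORACLE_HOME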
Oracle user environment setting
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
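A quick way to confirm the oracle user environment on each node is to source the profile and check the resulting paths. The instance name used below (HP12C_1) is the one created earlier with DBCA and differs per node:
[oracle@oracle52 ~]$ . ~/.bash_profile
[oracle@oracle52 ~]$ export ORACLE_SID=HP12C_1
[oracle@oracle52 ~]$ which sqlplus
/u01/app/oracle/12c/bin/sqlplus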
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
46
Although depreciated since 11gR2 crs_stat still works
[gridoracle52 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
oraDATAdg orauptype ONLINE ONLINE oracle52
oraFRAdg orauptype ONLINE ONLINE oracle52
oraERlsnr oraertype ONLINE ONLINE oracle52
oraN1lsnr oraertype ONLINE ONLINE oracle52
oraMGMTLSNR oranrtype ONLINE ONLINE oracle52
oraasm oraasmtype ONLINE ONLINE oracle52
oracvu oracvutype ONLINE ONLINE oracle52
oramgmtdb oradbtype ONLINE ONLINE oracle52
oranetwork orarktype ONLINE ONLINE oracle52
oraoc4j oraoc4jtype ONLINE ONLINE oracle52
oraons oraonstype ONLINE ONLINE oracle52
oraSM1asm application ONLINE ONLINE oracle52
ora52lsnr application ONLINE ONLINE oracle52
orae52ons application ONLINE ONLINE oracle52
orae52vip orat1type ONLINE ONLINE oracle52
oraSM2asm application ONLINE ONLINE oracle53
ora53lsnr application ONLINE ONLINE oracle53
orae53ons application ONLINE ONLINE oracle53
orae53vip orat1type ONLINE ONLINE oracle53
orascan1vip oraiptype ONLINE ONLINE oracle52
Checking the SCAN configuration The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster Because the SCAN is associated with the cluster as a whole rather than to a particular node the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients It also adds location independence for the databases so that client configuration does not have to depend on which nodes are running a particular database instance Clients can continue to access the cluster in the same way as with previous releases but Oracle recommends that clients accessing the cluster use SCAN
[gridoracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)
Checking TCP connectivity to SCAN Listeners
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for oracle34
Checking integrity of name service switch configuration file
etcnsswitchconf
All nodes have same hosts entry defined in file etcnsswitchconf
Check for integrity of name service switch configuration file
etcnsswitchconf passed
Checking SCAN IP addresses
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful
ASM disk group creation
Since 11gR2 Oracle provides a GUI tool called ldquoASMCArdquo which can simplify the creation and the management for the ASM disk group Now therersquos minimal learning curve associated with configuring and maintaining an ASM instance ASM disk groups can be simply managed by both DBAs and system administrators with little knowledge of ASM ASMCA supports the majority of Oracle Database features such as the ASM cluster file system (ACFS) and volume management
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
47
The ASMCA application is run by the Grid Infrastructure owner Just launch it with ASMCA
Existing disk groups are already listed
Click ldquoCreaterdquo to create a new disk group ASMCA will recognize the candidate disks we created using ASMLib
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer Note also we used ldquoExternalrdquo redundancy as we do not need any extra failure group
Disk group successfully created
The 2 disk groups are now created but not mounted on all nodes Click ldquoMount Allrdquo to mount them all
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
48
Click ldquoYesrdquo to confirm
The disk groups are ready We can now quit ldquoASMCArdquo
We can also list the disk groups from a command line interface
[gridoracle52 ~]$ ORACLE_SID=+ASM1
[gridoracle52 ~]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB
Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 20480 14576
0 14576 0 Y DATA
MOUNTED EXTERN N 512 4096 1048576 20480 20149
0 20149 0 N FRA
MOUNTED EXTERN N 512 4096 1048576 20480 20384
0 20384 0 N VOTING
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning, since we ignored the previous warning.
And here is a summary of the selected options before the installation
The installation is ongoing
Run root.sh from a console on both nodes of the cluster:
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed
Create a RAC database
Get connected as the "oracle" user, then start DBCA from one node. A terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
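As an illustrative sketch (the DISPLAY value is an assumption), DBCA can be started this way once the oracle user environment is loaded:
[oracle@oracle52 ~]$ . ~/.bash_profile
[oracle@oracle52 ~]$ export DISPLAY=workstation:0.0
[oracle@oracle52 ~]$ $ORACLE_HOME/bin/dbca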
The 12c DBCA offers some new options in this screen, like "Manage Pluggable Database" and "Instance Management". For now we will create a new database.
At this stage we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed
Define the name of the new database
The "Server Pool" is a new 12c option. A server pool allows you to create server profiles and to run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
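For reference, a server pool can also be created and listed from the command line with srvctl; the pool name, minimum, maximum and importance below are purely illustrative values, not part of this setup:
[grid@oracle52 ~]$ srvctl add srvpool -serverpool demo_pool -min 1 -max 2 -importance 10
[grid@oracle52 ~]$ srvctl config srvpool -serverpool demo_pool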
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size; we can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress
Database creation completed
Post-installation steps
The listener service (a.k.a. SQL*Net) allows connections to the database instances. Since 11gR2 the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier).
First we need to check that the listeners are up and running
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid     10466     1  0 Jul26   00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     12601     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
[root@oracle53 ~]# ps -ef | grep LISTENER | grep -v grep
grid     22050     1  0 Jul26   00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners.
In node 1
SQL> show parameter local_lis

NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))

SQL> show parameter remote_listener

NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
In node 2
SQL> show parameter local_lis

NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
local_listener                    string      (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))

SQL> show parameter remote_listener

NAME                              TYPE        VALUE
--------------------------------- ----------- ------------------------------
remote_listener                   string      oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus the file is located in $GRID_HOME (owned by the grid user).
Below is the output from node 1 and then the output from node 2
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))            # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET        # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET        # line added by Agent
Check the status of the listener
[grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then check the status of the SCAN listener
[grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
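To illustrate how clients take advantage of SCAN, a tnsnames.ora alias only needs to reference the SCAN name, not the individual node listeners. The entry below is a sketch based on the HP12C service and the oracle34 SCAN name seen above; the alias name itself is arbitrary:
HP12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle34)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HP12C)
    )
  )
The same connection can be tested with EZConnect: sqlplus system@oracle34:1521/HP12C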
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
• To verify that we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
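Another typical run, shown here only as an illustration, validates the prerequisites before the database software installation:
[grid@oracle52 ~]$ cluvfy stage -pre dbinst -n oracle52,oracle53 -verbose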
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                   File Name        Disk group
--  -----    -----------------                   ---------        ----------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637    (ORCL:VOTING01)  [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository.
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name          :     +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                   :          4
         Total space (kbytes)      :     409568
         Used space (kbytes)       :       1492
         Available space (kbytes)  :     408076
         ID                        :  573555284
         Device/File Name          :      +DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
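As a quick sanity check after enabling or disabling autostart, the stack status can be queried as follows (illustrative commands, run as root with the grid environment loaded as above):
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check crs
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl stat res -t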
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096

%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated

© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
49
Oracle RAC 12c database installation
Environment setting Check that $ORACLE_BASE and $ORACLE_HOME are correctly set inbash_profile on all your cluster nodes
export ORACLE_BASE=u01apporacle
export ORACLE_HOME=u01apporacle12c Note in 12c the $GRID_HOME shouldnrsquot be a subdirectory of the $ORACLE_BASE
Installation
Login as oracleoinstall user and start the runInstaller from your distribution location
Define here whether to receive security updates from My Oracle Support or not
A warning message is displayed if we decline the previous suggestion
Define here whether to use the software updates from My Oracle Support or not
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
50
For now we just want to install the binaries The database will be created later with DBCA
Select RAC installation
The nodes members of the RAC cluster are selected in this screen The SSH setup or verification can also be done in this screen
Select Languages in this screen
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
51
The Standard Edition is eligible in a 4 CPUs ndash sockets ndash cluster maximum
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed
Define the operating system groups to be used
The pre-installation system check raises a warning on the swap space As said earlier this can be ignored
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
52
This is a double-check warning as we ignored the previous warning
And here is a summary of the selected options before the installation
The installation is ongoing
Run rootsh from a console on both nodes of the cluster
[rootoracle53 kits] cd u01apporacle12c
[rootoracle53 12c] rootsh
Performing root user operation for Oracle 12c
The following environment variables are set as
ORACLE_OWNER= oracle
ORACLE_HOME= u01apporacle12c
Enter the full pathname of the local bin directory [usrlocalbin]
The contents of dbhome have not changed No need to overwrite
The contents of oraenv have not changed No need to overwrite
The contents of coraenv have not changed No need to overwrite
Entries will be added to the etcoratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script
Now product-specific root actions will be performed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
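As a usage note (an assumption about the deployment method, not covered in this paper), a kickstart file such as the one above is typically handed to the RHEL 6 installer at boot time, for example:
linux ks=http://<install-server>/ks/rhel64-oracle.cfg
where <install-server> and the file path are placeholders to be adapted to the local environment.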
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
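A quick way to validate the grid user environment (an illustrative check, not part of the original appendix) is to source the profile and confirm that the ASM instance name is set as expected:
[grid@oracle52 ~]$ . ~/.bash_profile
[grid@oracle52 ~]$ echo $ORACLE_SID
+ASM1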
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
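Similarly, once the oracle user profile is sourced, the state of the RAC database created earlier (HP12C in this paper) can be checked with srvctl. The command below is shown for illustration and reports the instance running on each node:
[oracle@oracle52 ~]$ srvctl status database -d HP12C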
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated, to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
53
The installation is now completed
Create a RAC database
Create a RAC database
Get connected as ldquooraclerdquo user then start DBCA from a node A terminal X access is needed here again (unless using the silent mode based on answer file not documented here)
The 12c DBCA offers some new options in this screen like ldquoManage Pluggable Databaserdquo and ldquoInstance Managementrdquo For now we will create a new database
In this stage we can either create a new database using a template or customize the new database
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals:
• To verify if we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms / configurations - well-defined uniform behavior
CVU non-goals:
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
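A few further illustrative invocations (not taken from this installation run; the node names oracle52 and oracle53 are assumed):
cluvfy comp nodecon -n oracle52,oracle53 -verbose
cluvfy comp ocr -n all
cluvfy stage -pre crsinst -n oracle52,oracle53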
Check the clusterware integrity:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name        Disk group
--  -----    -----------------                 ---------        ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01)   [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
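The clusterware also backs up the OCR automatically at regular intervals; as an illustrative check (run as root, grid home path assumed from this installation), the existing backups can be listed with:
[root@oracle52 ~]# /u01/app/grid/12c/bin/ocrconfig -showbackup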
Additional commands
To disable (or re-enable) the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
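To verify the overall stack after enabling or disabling autostart, the following standard checks can be used (illustrative, run as the grid user):
[grid@oracle52 ~]$ crsctl check cluster -all
[grid@oracle52 ~]$ crsctl stat res -t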
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
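A quick sanity check that the profile is picked up at login (illustrative) is to run a command under the grid account:
[root@oracle52 ~]# su - grid -c 'echo $ORACLE_SID'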
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or simply in converging current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix – HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
54
Select whether to use RAC and which template to use Also note this new DBCA 12c option it is now possible to see what parameters are used in the template database
The parameter detail screen is displayed
Define the name of the new database
The ldquoServer Poolrdquo is a 12c new option The server pool allows to create server profiles and to run RAC database in it It helps optimizing the workload load balancing between the nodes of a cluster mainly when these nodes are not equally powerful
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
55
Here we define whether we want to configure the Enterprise Manager and to run the Cluster Verification script We can also configure the EM Cloud Control which is a new management feature for 12c
Here we define the credentials for the Oracle database
Specify the database location
Select sample schema and security options if needed
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
56
Select details about the sizing and the configuration of the database
Ready to install
Oracle runs the cluster and configuration checks again We still have an alert on the swap size We can ignore it
Last check before the installation Click Finish
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
57
Database creation in Progress
Database creation completed
Post-installation steps
The service (aka sqlnet) allows the connection to the database instances Since 11gR2 the way it works slightly changes as Oracle introduced the SCAN service (seen earlier)
First we need to check that the listeners are up and running
[rootoracle52 ~] ps -ef|grep LISTENER|grep -v grep
grid 10466 1 0 Jul26 000009 u01appgrid12cbintnslsnr
LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify ndashinherit
[rootoracle53 ~] ps -ef|grep LISTENER|grep -v grep
grid 22050 1 0 Jul26 000010 u01appgrid12cbintnslsnr
LISTENER -no_crs_notify -inherit
Then we need to check the listener definition within the database allocation parameters Note a consequence of the SCAN new feature the remote_listener points to the SCAN service instead of a list of node listeners
In node 1
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216032)(PORT=1521)) SQLgt
show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name      Disk group
--  -----    -----------------                 ---------      ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637  (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
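If the voting files ever need to be relocated to a different ASM disk group, crsctl can do it in one step. This is only a sketch: +NEWDG is a hypothetical target disk group, and the command is run as root from the Grid home.
[root@oracle52 ~]# /u01/app/grid/12c/bin/crsctl replace votedisk +NEWDG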
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
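Because the OCR is critical to the cluster, it is also worth checking that the automatic OCR backups are being produced, and a manual backup can be taken before any risky change. A short sketch, run as root from the Grid home:
[root@oracle52 ~]# /u01/app/grid/12c/bin/ocrconfig -showbackup
[root@oracle52 ~]# /u01/app/grid/12c/bin/ocrconfig -manualbackup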
Additional commands
To disable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
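After changing the autostart setting, the state of the stack on both nodes can be confirmed with crsctl, for example:
[root@oracle52 ~]# /u01/app/grid/12c/bin/crsctl check cluster -all
[root@oracle52 ~]# /u01/app/grid/12c/bin/crsctl stat res -t
The first command reports whether CRS, CSS, and EVM are online on each node; the second lists every registered cluster resource and where it is running.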
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
Grid user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Summary
HP continues to be the leader of installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW September 2013
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
58
In node 2
SQLgt show parameter local_lis
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST=
17216033)(PORT=1521))
SQLgt show parameter remote_listener
NAME TYPE VALUE
--------------------------------- ----------- ---------------------------
remote_listener string oracle34 1521
Look at the listenerora files The listening service is part of the cluster Thus the file is located in $GRID_HOME (owned by the grid user)
Below is the output from node 1 and then the output from node 2
[gridoracle52 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))
line added by Agent
listenerora Network Configuration File
u01appgrid12cnetworkadminlistenerora
Generated by Oracle configuration tools
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
LISTENER =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
)
)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET line added by Agent
[gridoracle53 ~]$ more $ORACLE_HOMEnetworkadminlistenerora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET line added by Agent
Check the status of the listener
[gridoracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150244
Copyright (c) 1991 2013 Oracle All rights reserved
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
59
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140422
Uptime 4 days 0 hr 58 min 21 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listeneralertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216052)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216032)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_w
allet_directory=u01apporacle12cadminHP12Cxdb_wallet))(Presentation=HTTP)
(Session=RAW))
Services Summary
Service +ASM has 1 instance(s)
Instance +ASM1 status READY has 1 handler(s) for this service
Service -MGMTDBXDB has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
Service HP12C has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 1 instance(s)
Instance HP12C_2 status READY has 1 handler(s) for this service
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 2 handler(s) for this service
The command completed successfully
Then check the status of the SCAN listener
[gridoracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux Version 121010 - Production on 30-JUL-2013 150511
Copyright (c) 1991 2013 Oracle All rights reserved
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux Version 121010 - Production
Start Date 26-JUL-2013 140354
Uptime 4 days 1 hr 1 min 16 sec
Trace Level off
Security ON Local OS Authentication
SNMP OFF
Listener Parameter File u01appgrid12cnetworkadminlistenerora
Listener Log File
u01appbasediagtnslsnroracle52listener_scan1alertlogxml
Listening Endpoints Summary
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=17216034)(PORT=1521)))
Services Summary
Service HP12C has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Service HP12CXDB has 2 instance(s)
Instance HP12C_1 status READY has 1 handler(s) for this service
Instance HP12C_2 status READY has 1 handler(s) for this service
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
bull To verify if we have a well formed cluster for RAC installation configuration and operation
bull Full stack verification
bull Non-intrusive verification
bull Easy to use interface
bull Supports all RAC platforms configurations - well-defined uniform behavior
CVU non-goals
bull Does not perform any cluster or RAC operation
bull Does not take any corrective action following the failure of a verification task
bull Does not enter into areas of performance tuning or monitoring
bull Does not attempt to verify the internals of a cluster database
[gridoracle52 ~]$ cluvfy comp -list
Valid Components are
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements andor best practice
recommendations
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
61
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1rac2
It will check for hardware and operating system setup
Check the clusterware integrity
[gridoracle52 ~]$ cluvfy stage -post hwos -n oracle52oracle53 Identify the
OCR and the voting disk location
Post-check for hardware and operating system setup was successful
The crsctl command seen before helps to identify the location of the voting disk
[gridoracle52 ~]$ crsctl query css votedisk
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1 ONLINE b7dcc18124ac4facbf5c0464874c6637 (ORCLVOTING01) [VOTING]
Located 1 voting disk(s)
OCR does have its own tools ocrcheck for instance will tell the location of the cluster repository
[gridoracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is
DeviceFile Name +VOTING
[gridoracle52 ~]$
[gridoracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows
Version 4
Total space (kbytes) 409568
Used space (kbytes) 1492
Available space (kbytes) 408076
ID 573555284
DeviceFile Name +DATA
DeviceFile integrity check succeeded
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
DeviceFile not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Additional commands
To disable the cluster autostart
[rootoracle52 ~] homegridbash_profile
[rootoracle52 ~] $ORACLE_HOMEbincrsctl disable crs
CRS-4621 Oracle High Availability Services autostart is disabled
[rootoracle52 ~] $ORACLE_HOMEbincrsctl enable crs
CRS-4622 Oracle High Availability Services autostart is enabled
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
62
Appendix
Anaconda file
Kickstart file automatically generated by anaconda
version=DEVEL
install
cdrom
lang en_USUTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted
$6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh
2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc EuropeBerlin
bootloader --location=mbr --driveorder=mpatha --append=crashkernel=auto rhgb
quiet
The following is the partition information you requested
Note that any partitions you deleted are not expressed
here so unless you clear all partitions first this is
not guaranteed to work
clearpart --none
part boot --fstype=ext4 --asprimary --size=200
part --fstype=ext4 --size=40000
part swap --size=4096
packages
additional-devel
base
client-mgmt-tools
compat-libraries
console-internet
core
debugging
basic-desktop
desktop-debugging
desktop-platform
desktop-platform-devel
directory-client
general-desktop
graphical-admin-tools
hardware-monitoring
internet-browser
java-platform
kde-desktop
large-systems
legacy-x
network-file-system-client
performance
perl-runtime
server-platform
server-platform-devel
server-policy
system-admin-tools
libXinerama-devel
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
63
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
end
Grid user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11$ORACLE_HOMEbin
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
s
ropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsam
b
abinusrucb
PATH=$PATH$HOMEOPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
64
ORACLE_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORA
CLE_HOMEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Oracle user environment setting
bash_profile
Get the aliases and functions
if [ -f ~bashrc ] then
~bashrc
fi
User specific environment and startup programs
PATH=$PATH$HOMEbin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=u01apporacle
ORACLE_HOME=u01apporacle12c
GRID_HOME=u01appgrid12c
PATH=$PATH$HOMEbin
export PATH
PATH=$PATHusrbinX11
PATH=$PATH$ORACLE_HOMEbin$HOMEOPatch
PATH=$PATHbinusrbinusrsbinetcoptbinusrccsbinusrlocalbinu
sropenwinbinoptlocalGNUbinoptlocalbinoptNSCPnavbinusrlocalsa
mbabinusrucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOMEocommonnlsadmindata
export LD_LIBRARY_PATH=$ORACLE_HOMEliblibusrlibusropenwinlib
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATHusrtdlibusrucblibusrlocallib$ORACLE
_HOMElib
export
CLASSPATH=$ORACLE_HOMEJRE$ORACLE_HOMEjlib$ORACLE_HOMErdbmsjlib$ORACLE_HO
MEnetworkjlib
export TMPDIR=tmp
export TEMP=tmp
export NLS_LANG=AMERICAN_AMERICAUS7ASCII
export LANG=C
umask 022
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
65
Summary
HP continues to be the leader of installed servers running Oracle Wersquore extending our industry leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oraclersquos software As a leader in Oracle database market share HP will continue to provide Oracle focused solutions to our joint customers such as this detailed installation cookbook HP will continue to test various hardware configurations with Oracle 12c database to make it easier for our customers to implement their critical business applications
Together HP and Oracle will help the businesses succeed whether in cloud solutions or just converging the current data center architectures We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry specific solutions tested and validated to make your life easier
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
For more information
Oracle certification matrix httpssupportoraclecom
Oracle 12c database documentation oraclecomplsdb121homepage
Oracle Technology Network (OTN) RAC oraclecomtechnetworkdatabaseclusteringoverviewindexhtml
HP Reference Architectures for Oracle Grid on the HP BladeSystem httph71028www7hpcomenterprisecache494866-0-0-0-121html
Fibre Channel Host Bus Adapters (SAN connectivity) httph18006www1hpcomstoragesaninfrastructurehbahtml
Linux drivers for ProLiant httph18013www1hpcomproductsserverslinuxhplinuxcerthtml
Device mapper reference guide (access requires an HP Passport username and password) httph20272www2hpcomPagesspock2HtmlaspxhtmlFile=an_solutions_linuxhtml
Oracle ASMLib packages oraclecomtechnetworkserver-storagelinuxasmlibrhel6-1940776html
ASMLib and Multipathing httpbizsupport1austinhpcombcdocssupportSupportManualc01725586c01725586pdf
Device mapper documentation httph20000www2hpcombizsupportTechSupportDocumentIndexjsplang=enampcc=usampprodClassId=-1ampcontentType=SupportManualampprodTypeId=18964ampprodSeriesId=3559651
Linux certification and support matrix ndash HP ProLiant server httph18004www1hpcomproductsserverslinuxhplinuxcerthtml
Red Hat ASMLib page httprhnredhatcomerrataRHEA-2013-0554html
Red Hat iptables setting httpsaccessredhatcomsitedocumentationen-USRed_Hat_Enterprise_Linux6htmlIdentity_Management_Guidetrust-requirementshtml
HP Software Delivery Repository httpdownloadslinuxhpcomSDR
To help us improve our documents please provide feedback at hpcomsolutionsfeedback
Sign up for updates
hpcomgogetupdated
copy Copyright 2013 Hewlett-Packard Development Company LP The information contained herein is subject to change without notice The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as
constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein
Oracle and Java are registered trademarks of Oracle andor its affiliates UNIX is a registered trademark of The Open Group
4AA4-8504ENW September 2013
Technical white paper | Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
60
Service _mgmtdb has 1 instance(s)
Instance -MGMTDB status READY has 1 handler(s) for this service
The command completed successfully
And finally we can check the srvctl value for the SCAN service
[gridoracle52 ~]$ srvctl config scan
SCAN name oracle34 Network 1
Subnet IPv4 172160025525500eth0
Subnet IPv6
SCAN 0 IPv4 VIP 17216034
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOMEbin directory you will find a Cluster Verification Utility (CVU) validation tool called cluvfy
CVU goals
• To verify whether we have a well-formed cluster for RAC installation, configuration, and operation
• Full stack verification
• Non-intrusive verification
• Easy to use interface
• Supports all RAC platforms and configurations - well-defined, uniform behavior
CVU non-goals
• Does not perform any cluster or RAC operation
• Does not take any corrective action following the failure of a verification task
• Does not enter into areas of performance tuning or monitoring
• Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are:
nodereach checks reachability between nodes
nodecon checks node connectivity
cfs checks CFS integrity
ssa checks shared storage accessibility
space checks space availability
sys checks minimum system requirements
clu checks cluster integrity
clumgr checks cluster manager integrity
ocr checks OCR integrity
olr checks OLR integrity
ha checks HA integrity
freespace checks free space in CRS Home
crs checks CRS integrity
nodeapp checks node applications existence
admprv checks administrative privileges
peer compares properties with peers
software checks software distribution
acfs checks ACFS integrity
asm checks ASM integrity
gpnp checks GPnP integrity
gns checks GNS integrity
scan checks SCAN configuration
ohasd checks OHASD integrity
clocksync checks Clock Synchronization
vdisk checks Voting Disk configuration and UDEV settings
healthcheck checks mandatory requirements and/or best practice recommendations
dhcp checks DHCP configuration
dns checks DNS configuration
baseline collect and compare baselines
Some examples of the cluster verification utility
cluvfy stage -post hwos -n rac1,rac2
It will check for hardware and operating system setup
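Individual components from the list above can also be verified one at a time. As an illustration, using the node names of this setup, a shared storage accessibility check would look like this:
[grid@oracle52 ~]$ cluvfy comp ssa -n oracle52,oracle53 -verbose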
Check the clusterware integrity
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
Post-check for hardware and operating system setup was successful
Identify the OCR and the voting disk location
The crsctl command seen before helps to identify the location of the voting disk
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   b7dcc18124ac4facbf5c0464874c6637 (ORCL:VOTING01) [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will report the location of the cluster registry:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
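It can also be useful to confirm that the clusterware is taking its automatic OCR backups; this additional check is not part of the original procedure and simply relies on the standard ocrconfig tool:
[grid@oracle52 ~]$ ocrconfig -showbackup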
Additional commands
To disable the cluster autostart
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
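Once autostart is re-enabled, the overall state of the cluster stack can be confirmed from any node; here root still uses the grid environment sourced above:
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl check cluster -all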
Appendix
Anaconda file
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1LVzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested.
# Note that any partitions you deleted are not expressed
# here, so unless you clear all partitions first, this is
# not guaranteed to work.
clearpart --none
part /boot --fstype=ext4 --asprimary --size=200
part / --fstype=ext4 --size=40000
part swap --size=4096
%packages
@additional-devel
@base
@client-mgmt-tools
@compat-libraries
@console-internet
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@directory-client
@general-desktop
@graphical-admin-tools
@hardware-monitoring
@internet-browser
@java-platform
@kde-desktop
@large-systems
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@server-platform
@server-platform-devel
@server-policy
@system-admin-tools
libXinerama-devel
openmotif-devel
libXmu-devel
xorg-x11-proto-devel
startup-notification-devel
libgnomeui-devel
libbonobo-devel
libXau-devel
libgcrypt-devel
popt-devel
libdrm-devel
libXrandr-devel
libxslt-devel
libglade2-devel
gnutls-devel
mtools
pax
python-dmidecode
oddjob
wodim
sgpio
genisoimage
device-mapper-persistent-data
abrt-gui
qt-mysql
desktop-file-utils
samba-winbind
certmonger
pam_krb5
krb5-workstation
openmotif
xterm
xorg-x11-xdm
libXmu
libXp
perl-DBD-SQLite
%end
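If you adapt this kickstart file to your own environment, it can be syntax-checked before being used; this optional step assumes the pykickstart package is available on the build host:
[root@oracle52 ~]# yum install -y pykickstart
[root@oracle52 ~]# ksvalidator /root/anaconda-ks.cfg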
Grid user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
Oracle user environment setting
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_HOME ORACLE_BASE GRID_HOME
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12c
GRID_HOME=/u01/app/grid/12c
PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11
PATH=$PATH:$ORACLE_HOME/bin:$HOME/OPatch
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCP/nav/bin:/usr/local/samba/bin:/usr/ucb
export ORACLE_SID=
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucb/lib:/usr/local/lib:$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp
export TEMP=/tmp
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
export LANG=C
umask 022
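After both profiles are in place, a quick optional check is to confirm that they are picked up at login, for either the grid or the oracle user, by printing the Oracle-related variables:
[root@oracle52 ~]# su - oracle -c 'env | grep -E "ORACLE_|GRID_HOME"'
[root@oracle52 ~]# su - grid -c 'env | grep -E "ORACLE_"'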
Summary
HP continues to be the leader in installed servers running Oracle. We're extending our industry-leading Oracle footprint by delivering the best customer experience with open, standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications.
Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint, industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix - HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
4AA4-8504ENW, September 2013