
Page 1 of 26

Upgrade Grid Infrastructure from 11.2.0.2 to 11.2.0.3 on Linux

Created by: Hans Camu
Date: 29 August 2012
http://camoraict.wordpress.com

In this article I will describe the steps to upgrade an Oracle 11.2.0.2 Grid Infrastructure cluster on Linux to version 11.2.0.3. A 2-node cluster with nodes RAC1 and RAC2 is used for this purpose. Unlike most other articles, I will not describe how to use the OUI to install the software and perform the upgrade. Instead, I will execute a software-only installation of the GI software (for which the OUI is used, but there it ends) and then apply some patches before the actual upgrade. The upgrade itself is executed using the silent option of the config.sh tool. I will show you how to check the prerequisites for upgrading the Grid Infrastructure software. The upgrade of the RDBMS software and the upgrade of the database will NOT be discussed here, but in a follow-up article.

Before you start upgrading, make sure patch 12539000 (11203: ASM UPGRADE FAILED ON FIRST NODE WITH ORA-03113) is installed. Without this patch you will not be able to perform a successful upgrade to 11.2.0.3. This patch is available on top of the base 11.2.0.2.0 version, and a version is available for GI PSU 1 through GI PSU 5. The fix is included in GI PSU 6 and above.

To upgrade to 11.2.0.3 the following steps are involved:
1. Meet the 11.2.0.3 prerequisites
2. Perform a software-only installation of the base 11.2.0.3 software
3. Apply the latest GI PSU (11.2.0.3.3) and, if needed, some additional patches to the GI software
4. Run the Cluster Verification Utility to check the prerequisites for the upgrade
5. Run config.sh in silent mode to configure the cluster using the new 11.2.0.3 software
6. Run rootupgrade.sh to actually upgrade to version 11.2.0.3


1. Meet the 11.2.0.3 prerequisites

Before we can start, the following patches need to be downloaded:
• Patch 10404530: Oracle patch set 11.2.0.3
• Patch 13919095: GI PSU 7 (July 2012)
• Patch 6880880: the latest OPatch version
• Patch 12539000, or PSU 6 or later: a prerequisite patch for 11.2.0.2

Before we start, we want to know exactly which version we are running at this moment:

oracle@rac1::/home/oracle
$ /u01/app/grid/11.2.0.2/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.2.0]
oracle@rac1::/home/oracle
$ /u01/app/grid/11.2.0.2/bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.2.0]
oracle@rac1::/home/oracle
$ /u01/app/grid/11.2.0.2/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]

As you can see, we are running version 11.2.0.2 at this moment.
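As a sketch, the same version check can be scripted across all cluster nodes over ssh. This is not from the article: the node names and GI_HOME path are the article's values, and the loop only prints the commands so they can be reviewed first (swap `echo` for `eval` to actually run them, assuming the user equivalence a working RAC cluster already has).

```shell
# Hypothetical helper: build the crsctl version-check command for every node.
# GI_HOME and NODES are this article's values; adjust for your own cluster.
GI_HOME=/u01/app/grid/11.2.0.2
NODES="rac1 rac2"
for NODE in $NODES; do
  CMD="ssh $NODE $GI_HOME/bin/crsctl query crs softwareversion"
  echo "$CMD"        # review only; replace with: eval "$CMD"
done
```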

2. Perform a software-only installation of the base 11.2.0.3 software

As of 11gR2, an upgrade of GI can only be executed as an out-of-place upgrade. This means the current GI_HOME will not be updated; instead, a new GI_HOME will be created. The upgrade itself is done by switching from the old to the new ORACLE_HOME and activating the new software. This is done by executing an upgrade script (rootupgrade.sh) as root.

Before the 11.2.0.3 software-only installation, create the new GI_HOME directory and set its permissions:

root@rac1::/root
$ mkdir /u01/app/grid/11.2.0.3
$ chown oracle:dba /u01/app/grid/11.2.0.3
$ chmod 755 /u01/app/grid/11.2.0.3

Now start the OUI to install the new GI software:

oracle@rac1::/home/oracle
$ cd /stageDir/11.2.0.3/grid
oracle@rac1::/stageDir/11.2.0.3/grid
$ ./runInstaller

   


Click Next.

Choose the option Install Oracle Grid Infrastructure Software Only. Click Next.


Click Next (choose another language first, if you want to).

Select the name for all groups and click Next.


It is recommended to define separate OS groups, but I chose not to do so. Click Yes.

Specify the GI_HOME directory. The software will be installed in this directory. Click Next.


Wait while some verification checks are executed.

Click Install to start installing the GI software.


Wait while the software is installed.

The root.sh script must be executed now:


root@rac1::/root
$ /u01/app/grid/11.2.0.3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/grid/11.2.0.3/perl/bin/perl -I/u01/app/grid/11.2.0.3/perl/lib -I/u01/app/grid/11.2.0.3/crs/install /u01/app/grid/11.2.0.3/crs/install/roothas.pl

To configure Grid Infrastructure for a Cluster execute the following command:
/u01/app/grid/11.2.0.3/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

Now click OK.


Click Close to exit the installer.

3. Apply the latest GI PSU (11.2.0.3.3) and, if needed, some additional patches to the GI software

The base GI software is now installed. The next step is to apply some additional patches. We will start with updating the OPatch utility to the latest version.

oracle@rac1::/home/oracle
$ cd /stageDir/11.2.0.3/patches/zip
oracle@rac1::/stageDir/11.2.0.3/patches/zip
$ unzip p6880880_112000_Linux-x86-64.zip -d /u01/app/grid/11.2.0.3
Archive:  p6880880_112000_Linux-x86-64.zip
   creating: /u01/app/grid/11.2.0.3/OPatch/oplan/
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/README.html
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/README.txt
   creating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/oplan.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/oracle.oplan.classpath.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/automation.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/OsysModel.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/EMrepoDrivers.jar
   creating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/apache-commons/
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/apache-commons/commons-cli-1.0.jar
   creating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/jaxb/
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/jaxb/activation.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/jaxb/jaxb-api.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/jaxb/jaxb-impl.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/jaxb/jsr173_1.0_api.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/osysmodel-utils.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/jlib/CRSProductDriver.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/oplan/oplan


replace /u01/app/grid/11.2.0.3/OPatch/docs/FAQ? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/grid/11.2.0.3/OPatch/docs/FAQ
  inflating: /u01/app/grid/11.2.0.3/OPatch/docs/Users_Guide.txt
  inflating: /u01/app/grid/11.2.0.3/OPatch/docs/Prereq_Users_Guide.txt
  inflating: /u01/app/grid/11.2.0.3/OPatch/jlib/opatch.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/jlib/opatchsdk.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/jlib/oracle.opatch.classpath.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/jlib/oracle.opatch.classpath.unix.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/jlib/oracle.opatch.classpath.windows.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchprereqs/opatch/opatch_prereq.xml
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchprereqs/opatch/rulemap.xml
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchprereqs/opatch/runtime_prereq.xml
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchprereqs/oui/knowledgesrc.xml
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchprereqs/prerequisite.properties
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatch
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatch.bat
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatch.pl
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatch.ini
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchdiag
  inflating: /u01/app/grid/11.2.0.3/OPatch/opatchdiag.bat
  inflating: /u01/app/grid/11.2.0.3/OPatch/emdpatch.pl
  inflating: /u01/app/grid/11.2.0.3/OPatch/README.txt
   creating: /u01/app/grid/11.2.0.3/OPatch/ocm/bin/
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/bin/emocmrsp
   creating: /u01/app/grid/11.2.0.3/OPatch/ocm/doc/
   creating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/emocmclnt-14.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/emocmclnt.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/emocmcommon.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/http_client.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/jcert.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/jnet.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/jsse.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/log4j-core.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/osdt_core3.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/osdt_jce.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/regexp.jar
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/lib/xmlparserv2.jar
 extracting: /u01/app/grid/11.2.0.3/OPatch/ocm/ocm.zip
  inflating: /u01/app/grid/11.2.0.3/OPatch/ocm/ocm_platforms.txt
   creating: /u01/app/grid/11.2.0.3/OPatch/crs/
   creating: /u01/app/grid/11.2.0.3/OPatch/crs/log/
 extracting: /u01/app/grid/11.2.0.3/OPatch/crs/log/dummy
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/auto_patch.pl
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/crsconfig_lib.pm
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/crsdelete.pm
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/crspatch.pm
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/installPatch.excl
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/oracss.pm
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/patch112.pl
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/s_crsconfig_defs
  inflating: /u01/app/grid/11.2.0.3/OPatch/crs/s_crsconfig_lib.pm
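Before applying any patches, it is worth confirming that the opatch binary on the new home is the freshly unzipped one. A minimal sketch (the `opatch version` subcommand is standard OPatch; the path is this article's new GI_HOME, and the actual call is left commented out since it only works on a node where the software is installed):

```shell
# Point at the OPatch that was just unzipped into the new 11.2.0.3 home.
NEW_HOME=/u01/app/grid/11.2.0.3
OPATCH="$NEW_HOME/OPatch/opatch"
echo "$OPATCH"
# "$OPATCH" version    # on a real node: prints the installed OPatch version
```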

Now that OPatch is up to date, we can install the latest GI PSU. While writing this article the latest GI PSU is PSU 7 (patch 13919095). This patch must be installed in 2 steps. The first step installs the GI part of the PSU and the second the RDBMS part of the PSU. This part must also be installed into the GI_HOME.


oracle@rac1::/stageDir/11.2.0.3/patches/GI
$ /u01/app/grid/11.2.0.3/OPatch/opatch napply -oh /u01/app/grid/11.2.0.3 -local /stageDir/11.2.0.3/patches/GI/13919095/13919095
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/grid/11.2.0.3
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/grid/11.2.0.3/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-28-12PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 13919095

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <ENTER>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/grid/11.2.0.3')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '13919095' to OH '/u01/app/grid/11.2.0.3'

Patching component oracle.crs, 11.2.0.3.0...

Patching component oracle.usm, 11.2.0.3.0...
Verifying the update...

OPatch found the word "warning" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
ins_srvm.mk:68: warning: overriding commands for target `libsrvm11.so'
ins_srvm.mk:31: warning: ignoring old commands for target `libsrvm11.so'
ins_srvm.mk:71: warning: overriding commands for target `libsrvmocr11.so'


ins_srvm.mk:34: warning: ignoring old commands for target `libsrvmocr11.so'
ins_srvm.mk:74: warning: overriding commands for target `libsrvmhas11.so'
ins_srvm.mk:37: warning: ignoring old commands for target `libsrvmhas11.so'

Patch 13919095 successfully applied.
OPatch Session completed with warnings.
Log file location: /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-28-12PM_1.log

OPatch completed with warnings.

The warnings can safely be ignored. Now we can install the RDBMS part of the PSU:

oracle@rac1::/stageDir/11.2.0.3/patches/GI
$ /u01/app/grid/11.2.0.3/OPatch/opatch napply -oh /u01/app/grid/11.2.0.3 -local /stageDir/11.2.0.3/patches/GI/13919095/13923374
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/grid/11.2.0.3
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/grid/11.2.0.3/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-33-57PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 13343438 13696216 13923374

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <ENTER>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/grid/11.2.0.3')

Is the local system ready for patching? [y|n]


y
User Responded with: Y
Backing up files...
Applying sub-patch '13343438' to OH '/u01/app/grid/11.2.0.3'

Patching component oracle.rdbms.rsf, 11.2.0.3.0...
Patching component oracle.rdbms, 11.2.0.3.0...
Patching component oracle.rdbms.dbscripts, 11.2.0.3.0...
Verifying the update...

Applying sub-patch '13696216' to OH '/u01/app/grid/11.2.0.3'
ApplySession: Optional component(s) [ oracle.sysman.console.db, 11.2.0.3.0 ], [ oracle.sysman.oms.core, 10.2.0.4.4 ] not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms.rsf, 11.2.0.3.0...
Patching component oracle.rdbms, 11.2.0.3.0...
Patching component oracle.sdo.locator, 11.2.0.3.0...
Verifying the update...

Applying sub-patch '13923374' to OH '/u01/app/grid/11.2.0.3'
ApplySession: Optional component(s) [ oracle.sysman.console.db, 11.2.0.3.0 ], [ oracle.network.cman, 11.2.0.3.0 ] not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms.rsf, 11.2.0.3.0...
Patching component oracle.rdbms, 11.2.0.3.0...
Patching component oracle.rdbms.dbscripts, 11.2.0.3.0...
Patching component oracle.network.rsf, 11.2.0.3.0...
Patching component oracle.network.listener, 11.2.0.3.0...
Verifying the update...

Composite patch 13923374 successfully applied.
Log file location: /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-33-57PM_1.log

OPatch succeeded.

The GI PSU is now successfully installed. We will install one other additional patch. This is a patch for a bug I have run into, but this can be any other patch you need, based on which patches are already installed in your current 11.2.0.2 GI_HOME and are not yet fixed in 11.2.0.3. So the following is just an example and can be skipped.

oracle@rac1::/stageDir/11.2.0.3/patches
$ /u01/app/grid/11.2.0.3/OPatch/opatch apply -oh /u01/app/grid/11.2.0.3 -local /stageDir/11.2.0.3/patches/13242070
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/grid/11.2.0.3


Central Inventory : /u01/app/oraInventory
   from           : /u01/app/grid/11.2.0.3/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-36-24PM_1.log

Applying interim patch '13242070' to OH '/u01/app/grid/11.2.0.3'

Verifying environment and performing prerequisite checks...
All checks passed.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <ENTER>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/grid/11.2.0.3')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...

Patching component oracle.rdbms, 11.2.0.3.0...
Verifying the update...
Patch 13242070 successfully applied
Log file location: /u01/app/grid/11.2.0.3/cfgtoollogs/opatch/opatch2012-08-28_17-36-24PM_1.log

OPatch succeeded.
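To decide which one-off patches you still need on 11.2.0.3, you can compare the inventories of the old and new homes. A hedged sketch using the standard `opatch lsinventory` subcommand (the paths are this article's homes; the actual calls are commented out because they require the Oracle homes to exist on a real node):

```shell
# Loop over both homes and list their installed one-off patches for comparison.
OLD_HOME=/u01/app/grid/11.2.0.2
NEW_HOME=/u01/app/grid/11.2.0.3
for OH in "$OLD_HOME" "$NEW_HOME"; do
  echo "== one-off patches installed in $OH =="
  # "$OH/OPatch/opatch" lsinventory -oh "$OH"   # run on a real cluster node
done
```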

The GI software is now installed and patched to meet your requirements. But it is still only available on this node. To make it available to all the other nodes in the cluster, we will create a tar file of the new GI software and then distribute it.

Because no root scripts have been run yet, there are no node-specific directories or log files in the new GI_HOME yet, so the complete directory can be archived.

oracle@rac1::/home/oracle
$ cd /u01/app/grid
$ ls -ltr
total 8
drwxr-xr-x 71 root   dba 4096 Aug 28 16:28 11.2.0.2
drwxr-xr-x 59 oracle dba 4096 Aug 28 17:36 11.2.0.3
oracle@rac1::/u01/app/grid
$ tar -zcvf /stageDir/11.2.0.3/gi_112033.tgz 11.2.0.3
11.2.0.3/
11.2.0.3/evm/


…
11.2.0.3/mesg/kfndgus.msb
11.2.0.3/log/
11.2.0.3/log/crs/

Now the 11.2.0.3 GI_HOME can be deployed to all the other nodes in the cluster. First create the new GI_HOME directory on the other node and set its permissions.

root@rac2::/root
$ mkdir /u01/app/grid/11.2.0.3
$ chown oracle:dba /u01/app/grid/11.2.0.3
$ chmod 755 /u01/app/grid/11.2.0.3

And now extract the tar file.

oracle@rac2::/home/oracle
$ tar -xvf /stageDir/11.2.0.3/gi_112033.tgz -C /u01/app/grid
11.2.0.3/
11.2.0.3/evm/
11.2.0.3/evm/lib/
11.2.0.3/evm/lib/libevmd.a
11.2.0.3/evm/init/
11.2.0.3/evm/admin/
…
11.2.0.3/mesg/
11.2.0.3/mesg/kfndgus.msb
11.2.0.3/log/
11.2.0.3/log/crs/

Register the new GI_HOME in the oraInventory:

oracle@rac2::/home/oracle
$ /u01/app/grid/11.2.0.3/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/grid/11.2.0.3 ORACLE_HOME_NAME=Ora11g_gridinfrahome2 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=`hostname -s`
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8191 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
Please execute the 'null' script at the end of the session.
'AttachHome' was successful.

If you have more nodes in your cluster, repeat these 3 steps on all nodes!
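For clusters with many nodes, the three deployment steps can be sketched as a loop. This is a hypothetical helper, not from the article: `REMOTE_NODES` and the root-over-ssh convention are assumptions about your environment, so the remote commands are left commented out for review.

```shell
# Sketch: repeat the deployment steps (mkdir as root, untar, attachHome) per node.
TARBALL=/stageDir/11.2.0.3/gi_112033.tgz
NEW_HOME=/u01/app/grid/11.2.0.3
REMOTE_NODES="rac2"            # add rac3 rac4 ... for larger clusters
for NODE in $REMOTE_NODES; do
  echo "deploying $TARBALL to $NODE"
  # 1. as root:   ssh root@$NODE "mkdir $NEW_HOME && chown oracle:dba $NEW_HOME && chmod 755 $NEW_HOME"
  # 2. as oracle: ssh $NODE "tar -xzf $TARBALL -C /u01/app/grid"
  # 3. as oracle: run the attachHome command shown above on $NODE
done
```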

 


4. Run the cluster verification utility to check the prerequisites for the upgrade

Before we run the actual upgrade, we can first determine whether we meet all prerequisites for a successful upgrade. For this purpose we run the Cluster Verification Utility.

oracle@rac1::/u01/app/grid
$ cd /stageDir/11.2.0.3/grid
oracle@rac1::/stageDir/11.2.0.3/grid
$ ./runcluvfy.sh stage -pre crsinst -upgrade -n all -rolling -src_crshome /u01/app/grid/11.2.0.2 -dest_crshome /u01/app/grid/11.2.0.3 -dest_version 11.2.0.3.0

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac2"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking CRS user consistency
CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.56.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.


Check of multicast communication passed.

Checking OCR integrity...
OCR integrity check passed

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/u01/app/grid/11.2.0.3"
Free disk space check passed for "rac1:/u01/app/grid/11.2.0.3"
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "oracle"
Group existence check passed for "dba"
Membership check for user "oracle" in group "dba" [as Primary] passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
Check for Oracle patch "12539000" in home "/u01/app/grid/11.2.0.2" passed
There are no oracle patches required for home "/u01/app/grid/11.2.0.3".
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed


Current group ID check passed

Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed

Package existence check passed for "cvuqdisk"

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes

UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Clusterware version consistency passed

Pre-check for cluster services setup was successful.

The check must be successful. If not, then the prerequisites that are not met must be fixed first. Rerun the check until it ends successfully.
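If OS-level checks fail, cluvfy can also generate fixup scripts for some of them via its -fixup and -fixupdir options (present in 11.2 cluvfy; confirm with ./runcluvfy.sh -help on your installation media). A sketch that only assembles the command for review rather than running it:

```shell
# Build the cluvfy pre-check command with fixup generation enabled.
FIXUP_DIR=/tmp/cvu_fixup
CMD="./runcluvfy.sh stage -pre crsinst -upgrade -n all -rolling \
 -src_crshome /u01/app/grid/11.2.0.2 -dest_crshome /u01/app/grid/11.2.0.3 \
 -dest_version 11.2.0.3.0 -fixup -fixupdir $FIXUP_DIR"
echo "$CMD"
# After running it, execute the generated runfixup.sh as root on the reported
# nodes, then rerun the verification until it ends successfully.
```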


5. Run the config.sh in silent mode to configure the cluster using the new 11.2.0.3 software

To configure the cluster we must run the Oracle Clusterware Configuration Wizard. This time we will not run the GUI, but will execute a silent configuration. When using this option, several parameters must be passed to config.sh. I will first discuss these parameters:

-silent
    The tool will execute in a silent, non-interactive mode
ORACLE_HOSTNAME
    The hostname the tool will run from
INVENTORY_LOCATION
    Specifies the location which holds the inventory files
SELECTED_LANGUAGES
    Specify the languages in which the components will be installed
oracle.install.option
    Specify the installation option
ORACLE_BASE
    Specify the complete path of the Oracle Base
ORACLE_HOME
    Specify the complete path of the Oracle Home
oracle.install.asm.OSDBA
    The DBA_GROUP is the OS group which is to be granted OSDBA privileges
oracle.install.asm.OSOPER
    The OPER_GROUP is the OS group which is to be granted OSOPER privileges
oracle.install.asm.OSASM
    The OSASM_GROUP is the OS group which is to be granted OSASM privileges. This must be different from the previous two
oracle.install.crs.config.clusterNodes
    Specify a list of public node names and virtual hostnames that have to be part of the cluster. The list should be a comma-separated list of nodes. Each entry in the list should be a colon-separated string that contains 2 fields
oracle.install.crs.upgrade.clusterNodes
    Specify the nodes for the upgrade
oracle.install.asm.upgradeASM
    For RAC-ASM only. The value should be 'true' while upgrading Cluster ASM of version 11gR2 (11.2.0.1.0) and above

Run the config.sh in silent mode to configure Grid Infrastructure 11.2.0.3:

oracle@rac1::/home/oracle $ /u01/app/grid/11.2.0.3/crs/config/config.sh -silent \
ORACLE_HOSTNAME=rac1.camoraict.com \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
oracle.install.option=UPGRADE \
ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/grid/11.2.0.3 \
oracle.install.asm.OSDBA=dba \
oracle.install.asm.OSOPER=dba \
oracle.install.asm.OSASM=asmadmin \
oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip \
oracle.install.crs.upgrade.clusterNodes=rac1,rac2 \
oracle.install.asm.upgradeASM=true

As a root user, execute the following script(s):
1. /u01/app/grid/11.2.0.3/rootupgrade.sh

Execute /u01/app/grid/11.2.0.3/rootupgrade.sh on the following nodes:
[rac1, rac2]

Successfully Setup Software.
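The colon- and comma-separated value for oracle.install.crs.config.clusterNodes is easy to get wrong when the cluster has many nodes. As a small illustration (the helper build_cluster_nodes is my own hypothetical sketch, not part of the Oracle installer), the value can be assembled from "node vip" pairs:

```shell
#!/bin/sh
# Hypothetical helper: build the comma-separated list of node:vip entries
# expected by oracle.install.crs.config.clusterNodes from "node vip" pairs
# read on stdin.
build_cluster_nodes() {
  list=""
  while read -r node vip; do
    entry="${node}:${vip}"
    if [ -z "$list" ]; then
      list="$entry"
    else
      list="${list},${entry}"
    fi
  done
  echo "$list"
}
```

For the two-node cluster used here, feeding it "rac1 rac1-vip" and "rac2 rac2-vip" yields exactly the rac1:rac1-vip,rac2:rac2-vip value passed above.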

   

6. Run the rootupgrade.sh to actually upgrade to version 11.2.0.3

When you execute the rootupgrade.sh script (as the root user, of course), the actual upgrade is performed. It is highly recommended to unmount all ACFS mounts before you execute the script. If there are open file handles on an ACFS filesystem, the filesystem cannot be unmounted automatically, causing the upgrade to fail!

root@rac1::/root $ mount
/dev/mapper/systemvg-rootlv on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/systemvg-usrlv on /usr type ext3 (rw)
/dev/mapper/systemvg-tmplv on /tmp type ext3 (rw)
/dev/mapper/systemvg-homelv on /home type ext3 (rw)
/dev/mapper/systemvg-varlv on /var type ext3 (rw)
/dev/mapper/u01vg-u01lv on /u01 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/software on /media/sf_/software type vboxsf (gid=503,rw)
/dev/asm/v_orabackup-238 on /orabackup1 type acfs (rw)
/dev/asm/v_software-238 on /software type acfs (rw)
root@rac1::/root $ umount.acfs /orabackup1
root@rac1::/root $ umount.acfs /software
root@rac1::/root $ mount
/dev/mapper/systemvg-rootlv on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/mapper/systemvg-usrlv on /usr type ext3 (rw)
/dev/mapper/systemvg-tmplv on /tmp type ext3 (rw)
/dev/mapper/systemvg-homelv on /home type ext3 (rw)
/dev/mapper/systemvg-varlv on /var type ext3 (rw)
/dev/mapper/u01vg-u01lv on /u01 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/software on /media/sf_/software type vboxsf (gid=503,rw)
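Finding the ACFS filesystems in the mount listing by eye works for two mounts, but the check can also be scripted. This is a hypothetical sketch (the helper name list_acfs_mounts is mine); it reads mount-style output from stdin so the parsing logic can be shown without a live cluster:

```shell
#!/bin/sh
# Hypothetical helper: list the mount points of type acfs from `mount`-style
# output ("<device> on <mountpoint> type <fstype> (<options>)").
list_acfs_mounts() {
  awk '$4 == "type" && $5 == "acfs" { print $3 }'
}
```

On a live node you could then run mount | list_acfs_mounts and pass each reported path to umount.acfs before starting rootupgrade.sh.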

Now run the rootupgrade.sh script on the first node as user root:

root@rac1::/root $ /u01/app/grid/11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.DGACFS.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DGDATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DGFRA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DGGRID.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DGDATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DGFRA.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DGACFS.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DGGRID.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'
CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Check if the ACFS mounts are back again:

oracle@rac1::/home/oracle $ mount
/dev/mapper/systemvg-rootlv on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/systemvg-usrlv on /usr type ext3 (rw)
/dev/mapper/systemvg-tmplv on /tmp type ext3 (rw)
/dev/mapper/systemvg-homelv on /home type ext3 (rw)
/dev/mapper/systemvg-varlv on /var type ext3 (rw)
/dev/mapper/u01vg-u01lv on /u01 type ext3 (rw)

/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/software on /media/sf_/software type vboxsf (gid=503,rw)
/dev/asm/v_orabackup-238 on /orabackup1 type acfs (rw)
/dev/asm/v_software-238 on /software type acfs (rw)

Now you can run the upgrade script on the other nodes in the cluster, one by one. Here too, make sure all ACFS mounts are unmounted first!

root@rac2::/root $ umount.acfs /orabackup1
root@rac2::/root $ umount.acfs /software
root@rac2::/root $ mount
/dev/mapper/systemvg-rootlv on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/systemvg-usrlv on /usr type ext3 (rw)
/dev/mapper/systemvg-tmplv on /tmp type ext3 (rw)
/dev/mapper/systemvg-homelv on /home type ext3 (rw)
/dev/mapper/systemvg-varlv on /var type ext3 (rw)
/dev/mapper/u01vg-u01lv on /u01 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/software on /media/sf_/software type vboxsf (gid=503,rw)

Now run the rootupgrade.sh script on this node:

root@rac2::/root $ /u01/app/grid/11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.DGACFS.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DGDATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DGFRA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DGGRID.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DGDATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DGFRA.dg' on 'rac2' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac1'
CRS-2676: Start of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.DGACFS.dg' on 'rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DGGRID.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0
ASM upgrade has finished on last node.
PRKO-2116 : OC4J is already enabled
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Check if the ACFS mounts are back again:

oracle@rac2::/home/oracle $ mount
/dev/mapper/systemvg-rootlv on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/systemvg-usrlv on /usr type ext3 (rw)
/dev/mapper/systemvg-tmplv on /tmp type ext3 (rw)
/dev/mapper/systemvg-homelv on /home type ext3 (rw)
/dev/mapper/systemvg-varlv on /var type ext3 (rw)
/dev/mapper/u01vg-u01lv on /u01 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/software on /media/sf_/software type vboxsf (gid=503,rw)
/dev/asm/v_orabackup-238 on /orabackup1 type acfs (rw)
/dev/asm/v_software-238 on /software type acfs (rw)

Repeat the above steps for all nodes in the cluster and make sure all nodes are upgraded!
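Working through the remaining nodes can be sketched as a small driver loop. This is entirely my own sketch (the function upgrade_nodes_one_by_one and its command-prefix parameter are hypothetical); the prefix would be an ssh invocation as root in real use, and "echo" substitutes for it in a dry run:

```shell
#!/bin/sh
# Hypothetical driver: run rootupgrade.sh on the given nodes one at a time,
# stopping on the first failure so a broken node is not skipped over.
# RUN_PREFIX is e.g. "ssh root@" tooling in real use; pass "echo" to dry-run.
upgrade_nodes_one_by_one() {
  RUN_PREFIX=$1
  shift
  for node in "$@"; do
    $RUN_PREFIX "$node" /u01/app/grid/11.2.0.3/rootupgrade.sh || return 1
  done
}
```

Remember that each node still needs its ACFS filesystems unmounted before its turn, exactly as shown for rac1 and rac2 above.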

Now that the upgrade script has been successfully executed on all nodes in the cluster, we must check that we are really running the 11.2.0.3 version:

oracle@rac1::/home/oracle $ /u01/app/grid/11.2.0.3/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

oracle@rac2::/home/oracle $ /u01/app/grid/11.2.0.3/bin/crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]

oracle@rac2::/home/oracle $ /u01/app/grid/11.2.0.3/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
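The bracketed version in the crsctl output can also be extracted and compared programmatically, which is handy when checking many nodes. A minimal sketch (the helper name extract_version is mine):

```shell
#!/bin/sh
# Hypothetical helper: pull the version number between the last pair of
# square brackets out of a crsctl query line.
extract_version() {
  sed -n 's/.*\[\([0-9.]*\)\].*/\1/p'
}
```

Piping the softwareversion output of each node and the activeversion output through this filter and comparing the results confirms that every node reports 11.2.0.3.0.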

Now that we are running the correct upgraded version, it is safe to remove the binaries of the old version!

Congratulations, you have successfully upgraded your Grid Infrastructure cluster.