
Page 1

An Introduction to RAC System Test Planning Methods

Ajith Narayanan
ERP Advisor, Dell IT

Bangalore, 29th June 2013

Page 2

Ajith Narayanan

ERP Advisor

Dell IT

• 8.5 years of Oracle Apps & Oracle DBA experience.
• Blogger: http://oracledbascriptsfromajith.blogspot.com
• Website Chair: http://www.oracleracsig.org – Oracle RAC SIG

Who Am I?

Page 3

Agenda

• Real Application Clusters Testing Objectives
• Oracle Technologies Used For Tests
• Test 1: Planned Node Reboot
• Test 2: Unplanned Node Failure Of OCR Master
• Test 3: Restart Failed Node
• Test 4: Reboot All Nodes Same Time
• Test 5: Unplanned Instance Failure
• Test 6: Planned Instance Termination
• Test 7: Clusterware and Fencing
• Test 8: Service Failover
• Test 9: Public Network Failure
• Test 10: Interconnect Network Failure
• Sample Cluster Callout Script
• Q & A

Page 4

Real Application Clusters Testing Objectives

• To verify that the system has been installed and configured correctly, and that nothing is in a broken state.
• To verify that basic functionality still works in a specific environment and for a specific workload.
• To make sure that the system will achieve its objectives, in particular its availability and performance objectives.

Page 5

Oracle Technologies Used For Tests

• Fast Application Notification (FAN) – A notification mechanism that alerts applications to service-level changes in the database.
• Fast Connection Failover (FCF) – Uses FAN events to let database clients react proactively to down events by quickly failing connections over to surviving database instances.
• Transparent Application Failover (TAF) – Allows connections to be re-established automatically to a surviving database instance if the instance servicing the initial connection fails. TAF can fail over in-flight select statements (if configured), but insert, update and delete transactions will be rolled back (see the client-side sketch below).
• Runtime Connection Load Balancing (RCLB) – Provides application connection pools with intelligence about the current service level of the database instances. This improves application performance by directing requests to the least-loaded servers, and allows dynamic workload rebalancing when an instance loses service or a new instance is added.
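As a client-side illustration of the last two items, here is a minimal sketch of a tnsnames.ora entry with TAF enabled, followed by the srvctl goal settings that feed the load-balancing advisory used by RCLB. The alias RAC_TAF and host rac-scan.example.com are hypothetical; svctest is the service created in Test 8:

RAC_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = svctest)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )

ajithpathiyil2:/home/oracle[RAC1]$ srvctl modify service -d RAC -s svctest -B SERVICE_TIME -j SHORT

TYPE=SELECT lets in-flight queries resume on a surviving instance, while -B sets the runtime load-balancing goal and -j the connection load-balancing goal for the service.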

Page 6

Test 1: Planned Node Reboot

Procedure
• Start client workload and identify the instance with the most client connections.
• Reboot the node where the most loaded instance is running.
  For AIX, HP-UX, Windows: "shutdown -r"; for Linux: "shutdown -r now"; for Solaris: "reboot"

Expected Results
• The instances and other Clusterware resources go offline ('SERVER' field of crsctl stat res -t output).
• The node VIP fails over to the surviving nodes and will show a state of "INTERMEDIATE" with state_details of "FAILED_OVER".
• The SCAN VIP(s) that were running on the rebooted node will fail over to surviving nodes.
• The SCAN Listener(s) running on that node will fail over to a surviving node.
• Instance recovery is performed by another instance.
• Services are moved to available instances.
• Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration). With TAF configured, select statements should continue; active DML will be aborted.
• After the database reconfiguration, surviving instances continue processing their workload.

Measures
• Time to detect node or instance failure.
• Time to complete instance recovery (the alert log helps).
• Time to restore client activity to the same level.
• Time before the failed instance is restarted automatically by Clusterware and is accepting new connections.
• Successful failover of the SCAN VIP(s) and SCAN Listener(s).
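A quick way to bracket the recovery timing is to grep the surviving instance's alert log. A minimal sketch, assuming an 11.2-style diag destination and the hypothetical database/instance names RAC/RAC2; the exact message text can vary by version:

grep -B1 -E "Beginning instance recovery|Completed instance recovery" \
  $ORACLE_BASE/diag/rdbms/rac/RAC2/trace/alert_RAC2.log

The -B1 flag includes the timestamp line that precedes each message, giving the detection-to-completion window.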

Page 7

Procedure
• Start client workload.
• Identify the node that is the OCR master using the following grep command from any of the nodes:

grep -i "OCR MASTER" $GI_HOME/log/<node_name>/crsd/crsd.l*

NOTE: Windows users must manually review the $GI_HOME/log/<node_name>/crsd/crsd.l* logs to determine the OCR master.

• Power off the node that is the OCR master.

NOTE: On many servers the power-off switch performs a controlled shutdown, so the power supply must be cut instead.

Expected Results
• The instances and other Clusterware resources go offline ('SERVER' field of crsctl stat res -t output).
• The node VIP fails over to the surviving nodes and will show a state of "INTERMEDIATE" with state_details of "FAILED_OVER".
• The SCAN VIP(s) that were running on the powered-off node will fail over to surviving nodes.
• The SCAN Listener(s) running on that node will fail over to a surviving node.
• Instance recovery is performed by another instance.
• Services are moved to available instances.
• Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration). With TAF configured, select statements should continue; active DML will be aborted.
• After the database reconfiguration, surviving instances continue processing their workload.

Test 2: Unplanned Node Failure Of OCR Master

Page 8

Procedure
ajithpathiyil2:/home/oracle[RAC1]$ srvctl start instance -d RAC -i RAC1

Expected Results
• On clusters having 3 or fewer nodes, one of the SCAN VIPs and Listeners will be relocated to the restarted node when Oracle Clusterware starts.
• The VIP will migrate back to the restarted node.
• Services that had failed over as a result of the node failure will NOT automatically be relocated.
• Failed resources (ASM, listener, instance, etc.) will be restarted by the Clusterware.

Measures
• Time for all resources to become available again; check with "crsctl stat res -t".
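A simple polling loop makes the recovery time easy to capture; a plain-shell sketch:

while true; do
  date                  # timestamp each sample
  crsctl stat res -t    # snapshot of all resource states
  sleep 5
done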

Test 3: Restart Failed Node

Page 9

Procedure
• Issue a reboot on all nodes at the same time:
  For AIX, HP-UX, Windows: 'shutdown -r'
  For Linux: 'shutdown -r now'
  For Solaris: 'reboot'

Expected Results
• All nodes, instances and resources are restarted without problems.

Measures
• Time for all resources to become available again; check with "crsctl stat res -t".

Test 4: Reboot All Nodes Same Time

Page 10

Procedure

• Start client workload.
• Identify the single database instance with the most client connections and abnormally terminate that instance:

For AIX, HP-UX, Linux, Solaris:
Obtain the PID of the pmon process of the database instance: # ps -ef | grep pmon
Kill the pmon process: # kill -9 <pmon pid>

For Windows:
Obtain the thread ID of the pmon thread of the database instance by running:
SQL> select b.name, p.spid from v$bgprocess b, v$process p where b.paddr=p.addr and b.name='PMON';
Run orakill to kill the thread: cmd> orakill <SID> <Thread ID>
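On the UNIX/Linux platforms the two steps can be combined into a single line. A sketch assuming the hypothetical SID RAC1; the [o] in the pattern stops grep from matching its own process:

kill -9 `ps -ef | grep "[o]ra_pmon_RAC1" | awk '{print $2}'`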

Test 5: Unplanned Instance Failure

Page 11

Expected Results
• One of the other instances performs instance recovery.
• Services are moved to available instances, if a preferred instance failed.
• Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration).
• After a short freeze, surviving instances continue processing the workload.
• The failing instance will be restarted by Oracle Clusterware, unless this feature has been disabled.

Measures
• Time to detect instance failure.
• Time to complete instance recovery; check the alert log of the recovering instance.
• Time to restore client activity to the same level (assuming remaining nodes have sufficient capacity to run the workload).
• Duration of the database freeze during failover.
• Time before the failed instance is restarted automatically by Oracle Clusterware and is accepting new connections.

Test 5: Unplanned Instance Failure

Page 12

Procedure

• Issue a 'shutdown abort'.
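An equivalent way to abort one specific instance is via srvctl; a sketch using the same hypothetical database/instance names as the other tests:

ajithpathiyil2:/home/oracle[RAC1]$ srvctl stop instance -d RAC -i RAC1 -o abort

Either way, Oracle Clusterware treats this as a planned stop and will not restart the instance automatically.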

Expected Results

• One other instance performs instance recovery.
• Services are moved to available instances, if a preferred instance failed.
• Client connections are moved / reconnected to surviving instances (procedure and timings will depend on client types and configuration).
• The instance will NOT be automatically restarted by Oracle Clusterware due to the user-invoked shutdown.

Measures

• Time to detect instance failure.
• Time to complete instance recovery; check the alert log of the recovering instance.
• Time to restore client activity to the same level (assuming remaining nodes have sufficient capacity to run the workload).
• The instance will NOT be restarted by Oracle Clusterware due to the user-induced shutdown.

Test 6: Planned Instance Termination

Page 13

Test 7: Clusterware and Fencing

Node fencing is a general concept used by computer clusters to forcefully remove a malfunctioning node from the cluster. This preventive technique is a necessary measure to ensure that no I/O from the malfunctioning node can occur, thus preventing data corruption and guaranteeing cluster integrity.

Procedure
1. Start with a normal, running cluster with the database instances up and running.
2. Monitor the Clusterware logfiles on each node: on each node, start a new window and run the tail commands below.

The network heartbeats are associated with a timeout called misscount, set to 30 seconds from 11g Release 1 onwards:

ajithpathiyil1:/home/oracle[+ASM1] $ crsctl get css misscount
30

ajithpathiyil1:/home/oracle[+ASM1] $ oifcfg getif
bond0 192.168.78.51 global public
bond1 10.10.0.0 global cluster_interconnect

ajithpathiyil1:/home/oracle[grid]$ tail -f /u01/grid/oracle/product/11.2.0/grid_1/log/ajithpathiyil2/crsd/crsd.l*
ajithpathiyil1:/home/oracle[grid]$ tail -f /u01/grid/oracle/product/11.2.0/grid_1/log/`hostname -s`/cssd/ocssd.log

3. Down the private interconnect interface on one node to trigger fencing:

ajithpathiyil2:/home/oracle[grid]$ ifconfig eth1 down

Page 14

Test 7: Clusterware and Fencing

Expected Results
• Following this command, watch the logfiles you began monitoring in step 2 above. You should see errors in those logfiles, and eventually (it could literally take a minute or two) you will observe one node reboot itself.
• If you used ifconfig to trigger the failure, the node will rejoin the cluster and the instance should start automatically.

Alert Log
[cssd(2864)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.920 seconds...
[cssd(2864)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 2.900 seconds
[cssd(2864)]CRS-1609:This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity

More debugging information is written to the ocssd.bin process log file:
[CSSD][1119164736](:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with leader 2, rac2, is smaller than cohort of 1 nodes led by node 1, rac1, based on map type 2
[CSSD][1119164736]###################################
[CSSD][1119164736]clssscExit: CSSD aborting from thread clssnmRcfgMgrThread
[CSSD][1119164736]###################################

Page 15

Test 8: Service Failover

Procedure
• Create a service:

ajithpathiyil2:/home/oracle[RAC1]$ srvctl add service -d RAC -s svctest -r RAC1 -a RAC2 -P BASIC

ajithpathiyil2:/home/oracle[RAC1]$ srvctl start service -d RAC -s svctest

ajithpathiyil2:/home/oracle[RAC1]$ srvctl status service -d RAC -s svctest

Service svctest is running on instance(s) RAC1

ajithpathiyil2:/home/oracle[RAC1]$

Warning!
• You should never directly change the SERVICE_NAMES init parameter on a RAC database! This parameter is maintained automatically by the Clusterware.

SQL> show user

USER is "SYS"

SQL> select instance_name from v$instance;

INSTANCE_NAME

----------------

RAC1

SQL> shutdown abort;

ORACLE instance shut down.

SQL>
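After the abort, the service should have failed over to the available instance; given the -r RAC1 -a RAC2 definition above, the expected check looks like this:

ajithpathiyil2:/home/oracle[RAC1]$ srvctl status service -d RAC -s svctest
Service svctest is running on instance(s) RAC2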

Page 16

Procedure

• Unplug all network cables for the public network.

NOTE: It is recommended NOT to use ifconfig to down the interface; this may leave the address still plumbed to the interface, resulting in unexpected results.

Expected Results

• Check with "crsctl stat res -t".
• The ora.*.network and listener resources will go offline for the node.
• SCAN VIPs and SCAN Listeners running on the node will fail over to a surviving node.

ajithpathiyil2:/home/oracle[grid]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node ajithpathiyil2
ajithpathiyil2:/home/oracle[grid]$

ajithpathiyil2:/home/oracle[grid]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ajithpathiyil2
ajithpathiyil2:/home/oracle[grid]$

Test 9: Public Network Failure

Page 17

• The VIP for the node will fail over to a surviving node.
• The database instance will remain up but will be unregistered with the remote listeners.
• Database services will fail over to one of the other available nodes.
• If TAF is configured, clients should fail over to an available instance.

Sample FAN event payloads as seen by a server-side callout:

NODE VERSION=1.0 host=ajithpathiyil2 incarn=0 status=nodedown reason=public_nw_down timestamp=30-Aug-2009 01:56:12 reported=Sun Jan 30 01:56:13 CDT 2013

NODE VERSION=1.0 host=ajithpathiyil2 incarn=147028525 status=nodedown reason=member_leave timestamp=30-Aug-2009 01:57:19 reported=Sun Aug 30 01:57:20 CDT 2013

Measures
• Time to detect the network failure and relocate resources.

Test 9: Public Network Failure

Page 18

Procedure

• Unplug all network cables for the interconnect network.

NOTE: It is recommended NOT to use ifconfig to down the interface; this may leave the address still plumbed to the interface, resulting in unexpected results.

Expected Results
For 11.2.0.2 and above:

• CSSD will detect the split-brain situation and perform one of the following:
  o In a two-node cluster, the node with the lowest node number will survive.
  o In a multi-node cluster, the largest sub-cluster will survive.
• On the node(s) being evicted, a graceful shutdown of Oracle Clusterware will be attempted.
  o All I/O-capable client processes will be terminated and all resources will be cleaned up. If process termination and/or resource cleanup does not complete successfully, the node will be rebooted.
  o Assuming the above has completed successfully, OHASD will attempt to restart the stack; in this case the stack will be restarted once network connectivity of the private interconnect has been restored.
• Review the following logs:
  o $GI_HOME/log/<nodename>/alert<nodename>.log
  o $GI_HOME/log/<nodename>/cssd/ocssd.log

Test 10: Interconnect Network Failure

Page 19

Measures

For 11.2.0.2 and above:
• Oracle Clusterware will shut down gracefully; should the graceful shutdown fail (due to I/O processes not being terminated or resource cleanup failing), the node will be rebooted.
• Assuming the graceful shutdown of Oracle Clusterware succeeded, OHASD will restart the stack once network connectivity for the private interconnect has been restored.

Test 10: Interconnect Network Failure

Page 20

Sample Cluster Callout Script

#!/bin/ksh
#
# Author: Ajith Narayanan
# http://oracledbascriptsfromajith.blogspot.com
# Version 1.0
#
# This callout script is extended to report/mail the affected WebLogic
# services when any Oracle cluster event occurs.
#
umask 022

# Working files; the FAN event text passed as arguments is appended to $FAN_LOGFILE.
FAN_LOGFILE=$ORACLE_HOME/racg/usrco/`hostname`_uptime.log
EVENTLINE=$ORACLE_HOME/racg/usrco/`hostname`_eventline.log
EVENTLINE_MID=$ORACLE_HOME/racg/usrco/`hostname`_eventline_mid.log
MAIL_CONT=$ORACLE_HOME/racg/usrco/`hostname`_mail.log
WEBLOGIC_DS=$ORACLE_HOME/racg/usrco/weblogic_ds

# Record the incoming FAN event (synchronously, so the tail below sees it)
# and keep only the latest event line.
echo $* "reported="`date` >> $FAN_LOGFILE
tail -1 $FAN_LOGFILE > $EVENTLINE

# Transpose the space-separated event line into one token per line so that
# the grep commands below can pick out the individual key=value attributes.
awk '{
for (f = 1; f <= NF; f++) { a[NR, f] = $f } } NF > nf { nf = NF } END {
for (f = 1; f <= nf; f++) { for (r = 1; r <= NR; r++) { printf a[r, f] (r==NR ? RS : FS) } }
}' $EVENTLINE > $EVENTLINE_MID

SER=`grep "service=" $EVENTLINE_MID|awk -F= '{print $2}'`
DB=`grep "database=" $EVENTLINE_MID|awk -F= '{print $2}'`
INST=`grep "instance=" $EVENTLINE_MID|awk -F= '{print $2}'`
HOST=`grep "host=" $EVENTLINE_MID|awk -F= '{print $2}'`
STAT=`grep "status=" $EVENTLINE_MID|awk -F= '{print $2}'`

# Only act if the event carried at least one recognizable attribute.
if [[ -n "$SER" || -n "$DB" || -n "$INST" || -n "$HOST" || -n "$STAT" ]]; then
  if [ "$STAT" = nodedown ]; then
    cat $EVENTLINE_MID > $MAIL_CONT
    echo "**============================SERVICES AFFECTED===============================**" >> $MAIL_CONT
    grep -i "${DB}_" $WEBLOGIC_DS >> $MAIL_CONT
  elif [ "$STAT" = up ]; then
    cat $EVENTLINE_MID > $MAIL_CONT
    echo "**============================SERVICES RESTORED===============================**" >> $MAIL_CONT
    grep -i "${DB}_" $WEBLOGIC_DS|grep "SERVICE_NAME=$SER" >> $MAIL_CONT
  else
    cat $EVENTLINE_MID > $MAIL_CONT
    echo "**============================SERVICES AFFECTED===============================**" >> $MAIL_CONT
    grep -i "${DB}_" $WEBLOGIC_DS|grep "SERVICE_NAME=$SER" >> $MAIL_CONT
  fi
  # Mail the event summary for every handled event type.
  cat $MAIL_CONT | /bin/mail -s "Cluster $STAT event: $DB $INST $SER $HOST" [email protected]
fi

rm $EVENTLINE $EVENTLINE_MID $MAIL_CONT
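For the callout to fire, the script must be an executable file in the Grid home's racg/usrco directory on every node. A deployment sketch, with the script name callout.sh assumed:

cp callout.sh $GI_HOME/racg/usrco/
chmod 755 $GI_HOME/racg/usrco/callout.sh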

Sample Cluster Callout Script

Page 22

Q & A

Page 23

Thank You For Attending AIOUG Tech Day

Be A Part Of AIOUG For Sharing & Gaining Knowledge