http://dbaregistry.blogspot.in/search/label/*%20Home%20*
Welcome to Oracle DBA Forum Home
Oracle Wait Events
Before looking at wait events, let us understand the various states of a user process.
An Oracle user process is typically in one of three states:
a. Idle wait, e.g. 'SQL*Net message from client'.
b. Running code - either on the CPU or on a run queue. Oracle itself does not know whether it is on-CPU or just on a run queue.
c. Waiting:
i. for some resource to become available, e.g. an enqueue (lock) or a latch;
ii. for an activity it has requested to complete, such as an I/O read request.
Oracle records a set of 'Wait Events' for activities in 'a' and 'c', and records CPU utilization for 'b'.
This is best illustrated with a simplified example of a few seconds in the life of an Oracle
shadow process:
State Notes...
~~~~~ ~~~~~~~~
IDLE : Waiting for 'SQL*Net message from client'. Receives a SQL*Net packet requesting
'parse/execute' of a statement
ON CPU : decodes the SQL*Net packet.
WAITING : Waits for 'latch free' to obtain a 'library cache' latch. Gets the latch.
ON CPU : Scans for the SQL statement in the shared pool, finds a match, frees the latch,
sets up links to the shared cursor etc. and begins to execute.
WAITING : Waits for 'db file sequential read' as we need a block which is not in the
buffer cache, i.e. waiting for an I/O to complete.
ON CPU : Block read has completed so execution can continue. Constructs a SQL*Net
packet to send back to the user containing the first row of data.
WAITING : Waits on 'SQL*Net message to client' for an acknowledgement that the
SQL*Net packet was reliably delivered.
IDLE : Waits on 'SQL*Net message from client' for the next thing to do.
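The states above can be observed live for a running session. A minimal sketch using v$session_wait (the &sid substitution variable is a placeholder you supply):

```sql
-- Show what a given session is doing right now.
-- STATE = 'WAITING' means it is inside a wait event;
-- a 'WAITED ...' state means the wait completed and the session is on CPU.
SELECT sid, event, state, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  sid = &sid;
```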
The most common wait events are:
Buffer Busy waits/Cache Buffers Chains Latch waits
•This wait happens when a session wants to access a database block in the buffer cache
but cannot because the buffer is "busy". The two main cases where this can occur are:
1.Another session is reading the block into the buffer
2.Another session holds the buffer in a mode incompatible with our request
Cache buffers chains latch waits are caused by contention when multiple sessions wait
to read the same block.
Typical solutions are:
Look at the execution plan for the SQL being run and try to reduce the gets per
execution; this minimises the number of blocks being accessed and therefore
reduces the chance of multiple sessions contending for the same block.
Increase the PCTFREE storage parameter for the table. This will result in fewer rows per
block.
Consider implementing reverse key indexes (if range scans aren't commonly used
against the segment).
In v$session_wait, the P1, P2, and P3 columns identify the file number, block number,
and buffer busy reason codes, respectively.
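A sketch of how those columns can be used to locate the contended block (run against a live instance):

```sql
-- Map current buffer busy waits to file#, block# and reason code
SELECT sid, p1 file#, p2 block#, p3 reason_code
FROM   v$session_wait
WHERE  event = 'buffer busy waits';
```

The file#/block# pair can then be resolved to a segment via dba_extents, as shown for "read by other session" below.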
"Read By Other Session" wait event.
When a user session requests data, Oracle first reads the data from disk into the
database buffer cache. If two or more sessions request the same data, the first session
reads the data into the buffer cache while the other sessions wait. In previous versions
this wait was classified under the "buffer busy waits" event; in Oracle 10g and
higher it is broken out into the separate "read by other session" wait event.
Excessive waits for this event are typically due to several processes repeatedly reading
the same blocks, e.g. many sessions scanning the same index or performing full table
scans on the same table. Tuning this issue is a matter of finding and eliminating this
contention.
When a session is waiting on this event, an entry will be seen in the v$session_wait
view giving more information on the blocks being waited for:
SELECT p1 "file#", p2 "block#"
FROM v$session_wait WHERE event = 'read by other session';
If information collected from the above query repeatedly shows that the same block (or
range of blocks) is experiencing waits, this indicates a "hot" block or object.
The following query will give the name and type of the object:
SELECT owner, segment_name, segment_type
FROM dba_extents WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks - 1;
Log File Sync waits
Log file sync waits occur when sessions wait for redo data to be written to disk.
Typically this is caused by slow redo writes or by committing too frequently in the application.
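To gauge how severe log file sync waits are, a sketch against v$system_event (10g+ column names; run on a live instance):

```sql
-- Average 'log file sync' wait since instance startup, in milliseconds
SELECT event,
       total_waits,
       time_waited_micro / GREATEST(total_waits, 1) / 1000 AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'log file sync';
```

An average in the low single-digit milliseconds is usually healthy; much higher values point at slow redo I/O or commit storms.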
db file sequential read
Wait for an I/O read request to complete. A sequential read is usually a single-block
read. This differs from "db file scattered read" in that a sequential read reads data into
contiguous memory (whilst a scattered read reads multiple blocks and scatters them
into different buffers in the SGA).
db file scattered read
This wait happens when a session is waiting for a multiblock IO to complete. This
typically occurs during full table scans or index fast full scans. Oracle reads up to
DB_FILE_MULTIBLOCK_READ_COUNT consecutive blocks at a time and scatters them
into buffers in the buffer cache.
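To see the current multiblock read setting and any sessions currently waiting on scattered reads, a sketch:

```sql
-- Current multiblock read count
SELECT value
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';

-- Sessions currently on multiblock reads
-- (p1 = file#, p2 = starting block#, p3 = number of blocks)
SELECT sid, p1 file#, p2 block#, p3 blocks
FROM   v$session_wait
WHERE  event = 'db file scattered read';
```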
direct path read
Direct path reads are generally used by Oracle when reading directly into PGA memory
(as opposed to into the buffer cache).
This style of read request is typically used for:
Sort I/Os when memory Sort areas are exhausted and temporary tablespaces are used
to perform the sort
Parallel Query slaves.
direct path write
This wait is seen for:
Direct load operations (eg: Create Table as Select (CTAS) may use this)
Parallel DML operations
Sort IO (when a sort does not fit in memory)
db file parallel write
This wait occurs when the database writer (DBWR) has issued multiple I/O requests in
parallel to write dirty blocks from the buffer cache to disk, and is waiting for all of the
requests to complete.
--------------------------------------------------
V$Session_wait_history
From Oracle Database 10g, the new view v$session_wait_history lets us see the last few
wait events a session waited on.
The last 10 wait events that a session experienced can be displayed using the
v$session_wait_history view. The session has to be currently active; once the session
ends, this information is no longer available.
We can use the seq# column to sort the wait events into the order in which they
occurred for the session.
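A sketch of the query (the &sid substitution variable is a placeholder; seq# = 1 is the most recent wait):

```sql
-- Last 10 waits for a given session, most recent first
SELECT seq#, event, p1, p2, p3, wait_time
FROM   v$session_wait_history
WHERE  sid = &sid
ORDER  BY seq#;
```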
Labels: Interview Questions : Wait Events
Latches and Latch Contention
What is a latch?
Latches are low-level serialization mechanisms used to protect shared data structures in
the SGA. A process acquires a latch when working with a structure in the SGA and
continues to hold the latch for the period of time it works with the structure.
A latch is a type of lock that can be acquired and freed very quickly.
Latches are typically used to prevent more than one process from executing the same
piece of code at a given time.
List of latches that are of most concern to a DBA
BUFFER CACHE LATCHES:
There are two main latches which protect data blocks in the buffer cache.
- Cache buffers chains latch :- This latch is acquired whenever a block in the buffer
cache is accessed.
Cache buffers chains latch waits are caused by contention when multiple sessions wait
to read the same block.
Reduce logical I/O rates by tuning and minimizing the I/O requirements of the SQL
involved.
Identify and reduce the contention for hot blocks.
- Cache buffers LRU chain latch:- This latch is acquired in order to introduce a new
block into the buffer cache and when writing a buffer back to disk.
Reduce contention for this latch by increasing the size of the buffer cache.
REDOLOG BUFFER LATCHES:
- Redo allocation latch:- The redo allocation latch is acquired in order to allocate
space within the log buffer.
Increase the size of the LOG_BUFFER
Reduce the load of the log buffer using NOLOGGING features when possible.
- Redo copy latch:- This latch is used to write redo records into the redolog buffer.
Contention can be reduced by increasing the value of LOG_SIMULTANEOUS_COPIES on
multi-CPU systems.
LIBRARY CACHE:
- Library cache latch:- The library cache latch must be acquired in order to add a new
statement to the library cache.
Ensure that the application reuses SQL statements as much as possible.
If the application is already tuned, increase the SHARED_POOL_SIZE.
- Library cache pin latch:- This latch is acquired when a statement in the library
cache is reexecuted.
SHARED POOL RELATED LATCHES
- Shared pool latch:- While the library cache latch protects operations within the
library cache, the shared pool latch protects critical operations when allocating
and freeing memory in the shared pool.
To reduce shared pool latch contention, avoid hard parses where possible.
- Row cache objects latch:- This latch comes into play when user processes attempt
to access cached data dictionary values.
Reduce contention for this latch by increasing the size of the shared pool
(SHARED_POOL_SIZE).
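To see which of these latches an instance is actually contending on, a sketch against v$latch (10g-era syntax; sleeps, rather than raw misses, indicate real contention):

```sql
-- Top 10 latches by sleeps since instance startup
SELECT *
FROM  (SELECT name, gets, misses, sleeps
       FROM   v$latch
       ORDER  BY sleeps DESC)
WHERE rownum <= 10;
```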
How To find The Row Which is Locked by a Session
The specific row locked in a table can only be identified while another session is
waiting for that row.
Check the link below for the SQL to find it.
How To Identify The Row Which is Locked by a Session
Useful SQL Scripts
Useful SQL Scripts for day-to-day DBA tasks.
SQL to find the locked object and session that locked the object.
Run the script below when a user complains that an update statement hangs:
column LOCKED_OBJECT format A35 wrapped
column ORACLE_USERNAME format A17 wrapped
column OS_USER_NAME format A12 wrapped
column SESSION_ID format 999999 wrapped
column SINCE format A21
select o.OWNER || '.' || o.OBJECT_NAME as LOCKED_OBJECT,
lo.ORACLE_USERNAME,
lo.OS_USER_NAME,
lo.SESSION_ID,
lo.PROCESS,
case lo.LOCKED_MODE
when 0 then 'none'
when 1 then 'null (NULL)'
when 2 then 'row-S (SS)'
when 3 then 'row-X (SX)'
when 4 then 'share (S)'
when 5 then 'S/Row-X (SSX)'
when 6 then 'exclusive (X)'
end as LOCKED_MODE,
cast(sysdate-(CTIME/(24*60*60)) as timestamp(0)) as SINCE
from DBA_OBJECTS o,
V$LOCKED_OBJECT lo,
V$LOCK l
where o.OBJECT_ID = lo.OBJECT_ID
and lo.OBJECT_ID = l.ID1 and lo.SESSION_ID = l.SID
order by LOCKED_OBJECT, ORACLE_USERNAME, OS_USER_NAME, SESSION_ID;
------------------------------------------------------------------
When a session is waiting for a lock, how do you find the blocking session and the
object and row being waited for?
To find who is blocking:
select sb.username || '@' || sb.machine
|| ' ( SID=' || sb.sid || ' ) is blocking '
|| sw.username || '@' || sw.machine || ' ( SID=' || sw.sid || ' ) ' AS blocking_status
from v$lock lb, v$session sb, v$lock lw, v$session sw
where sb.sid=lb.sid and sw.sid=lw.sid
and lb.BLOCK=1 and lw.request > 0
and lb.id1 = lw.id1
and lb.id2 = lw.id2 ;
Waiting for object and rowid ...
select o.object_name, row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create(1, o.DATA_OBJECT_ID, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
from v$session s, dba_objects o
where sid = &waiting_sid and s.ROW_WAIT_OBJ# = o.OBJECT_ID ;
Waiting for row:
select * from table_name_from_above where rowid = &rowid_returned ;
------------------------------------------------------------
To check Data Guard
When you get an alert that the standby database is out of sync:
Run this at the Primary
set pages 1000
set lines 120
column DEST_NAME format a20
column DESTINATION format a35
column ARCHIVER format a10
column TARGET format a15
column status format a10
column error format a15
select DEST_ID,DEST_NAME,DESTINATION,TARGET,STATUS,ERROR from v$archive_dest
where DESTINATION is NOT NULL
/
select ads.dest_id,max(sequence#) "Current Sequence",
max(log_sequence) "Last Archived"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
where ad.dest_id=al.dest_id and al.dest_id=ads.dest_id group by ads.dest_id;
Run this at the Standby
select max(al.sequence#) "Last Seq Received" from v$archived_log al
/
select max(al.sequence#) "Last Seq Applied" from v$archived_log al
where applied ='YES'
/
select process,status,sequence# from v$managed_standby
/
Note: This script won't work for RAC
-----------------------------------------------------------------------------------
To get hidden parameters values
-- must be run from SYS.
SELECT
a.ksppinm "Parameter",
a.ksppdesc "Description",
b.ksppstvl "Session Value",
c.ksppstvl "Instance Value"
FROM
sys.x$ksppi a,
sys.x$ksppcv b,
sys.x$ksppsv c
WHERE
a.indx = b.indx
AND
a.indx = c.indx
AND
a.ksppinm LIKE '/_%' escape '/'
/
-------------------------------------------------------------------
To generate user-creation SQL.
This may be useful when you do a schema refresh with expdp/impdp.
set pagesize 0
set heading off
set long 999999999
set feedback off
set echo off
SELECT DBMS_METADATA.GET_DDL('USER', '&1') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','&1') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL( 'ROLE_GRANT','&1') FROM DUAL;
SELECT 'GRANT ' || PRIVILEGE || ' ON ' || OWNER || '.' || TABLE_NAME || ' TO ' || GRANTEE
|| ';' FROM DBA_TAB_PRIVS WHERE GRANTEE = '&1';
SELECT 'ALTER USER '||USERNAME||' QUOTA UNLIMITED ON '||TABLESPACE_NAME||';'
FROM DBA_TS_QUOTAS WHERE USERNAME='&1';
-------------------------------------------------------------------
Check for long running Transactions:
prompt Current transactions open for more than 20 minutes
prompt
col runlength HEAD 'Txn Open Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'TransactionID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id, sid || ',' || serial# sid, xidusn || '.' || xidslot || '.' || xidsqn xid,
(sysdate - start_date) * 1440 runlength, terminal, program
from gv$transaction t, gv$session s
where t.addr = s.taddr and (sysdate - start_date) * 1440 > 20;
--------------------------------------------------------------------------
Labels: SQL : Useful scripts for DBA
1 comment:
1. Anonymous, February 17, 2010 11:20 PM
To show the top PGA user:
select pid, spid, substr(username,1,20) "USER", program, PGA_USED_MEM, PGA_ALLOC_MEM, PGA_FREEABLE_MEM, PGA_MAX_MEM
from v$process
where pga_alloc_mem = (select max(pga_alloc_mem) from v$process where program not like '%LGWR%');
Oracle RAC Interview questions and answers
What are the Oracle Clusterware processes for 10g on Unix and Linux?
Cluster Synchronization Services (ocssd) — Manages cluster node membership and runs
as the oracle user; failure of this process results in cluster restart.
Cluster Ready Services (crsd) — The crs process manages cluster resources (which
could be a database, an instance, a service, a Listener, a virtual IP (VIP) address, an
application process, and so on) based on the resource's configuration information that is
stored in the OCR. This includes start, stop, monitor and failover operations. This
process runs as the root user
Event manager daemon (evmd) —A background process that publishes events that crs
creates.
Process Monitor Daemon (OPROCD) —This process monitors the cluster and provides I/O
fencing. OPROCD performs its check and stops running; if the wakeup occurs beyond the
expected time, OPROCD resets the processor and reboots the node. An OPROCD
failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck
timer on Linux platforms.
RACG (racgmain, racgimon) —Extends clusterware to support Oracle-specific
requirements and complex resources. Runs server callout scripts when FAN events
occur.
What are the Oracle database background processes specific to RAC?
•LMS—Global Cache Service Process
•LMD—Global Enqueue Service Daemon
•LMON—Global Enqueue Service Monitor
•LCK0—Instance Enqueue Process
To ensure that each Oracle RAC database instance obtains the block that it needs to
satisfy a query or transaction, Oracle RAC instances use two processes, the Global
Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain
records of the statuses of each data file and each cached block using a Global Resource
Directory (GRD). The GRD contents are distributed across all of the active instances.
What are the Oracle Clusterware components?
Voting Disk — Oracle RAC uses the voting disk to manage cluster membership by way of
a health check and arbitrates cluster ownership among the instances in case of network
failures. The voting disk must reside on shared disk.
Oracle Cluster Registry (OCR) — Maintains cluster configuration information as well as
configuration information about any cluster database within the cluster. The OCR must
reside on shared disk that is accessible by all of the nodes in your cluster
How do you troubleshoot a node reboot?
Check MetaLink (My Oracle Support):
Note 265769.1 Troubleshooting CRS Reboots
Note 559365.1 Using Diagwait as a diagnostic to get more information for diagnosing
Oracle Clusterware node evictions
How do you back up the OCR?
There is an automatic backup mechanism for the OCR. The default location is:
$ORA_CRS_HOME/cdata/<clustername>/
To display backups :
#ocrconfig -showbackup
To restore a backup :
#ocrconfig -restore
With Oracle RAC 10g Release 2 or later, you can also use the export command:
#ocrconfig -export -s online (and use the -import option to restore the contents)
With Oracle RAC 11g Release 1, you can take a manual backup of the OCR with the
command:
# ocrconfig -manualbackup
How do you back up the voting disk?
#dd if=voting_disk_name of=backup_file_name
How do I identify the voting disk location?
#crsctl query css votedisk
How do I identify the OCR file location?
check /var/opt/oracle/ocr.loc or /etc/ocr.loc ( depends upon platform)
or
#ocrcheck
Is ssh required for normal Oracle RAC operation?
"ssh" is not required for normal Oracle RAC operation. However, "ssh" must be
enabled for Oracle RAC and patchset installation.
What is SCAN?
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g
Release 2 feature that provides a single name for clients to access an Oracle Database
running in a cluster. The benefit is that clients using SCAN do not need to change if you
add or remove nodes in the cluster.
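A minimal client tnsnames.ora entry using SCAN might look like the following sketch (the SCAN host name and service name are illustrative placeholders):

```
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = racdb))
  )
```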
What is the purpose of Private Interconnect ?
Clusterware uses the private interconnect for cluster synchronization (network
heartbeat) and daemon communication between the clustered nodes. This
communication is based on the TCP protocol.
RAC uses the interconnect for cache fusion (UDP) and inter-process communication
(TCP). Cache Fusion is the remote memory mapping of Oracle buffers, shared between
the caches of participating nodes in the cluster.
Why do we have a Virtual IP (VIP) in Oracle RAC?
Without VIPs or FAN, clients connected to a node that died will often wait for a TCP
timeout period (which can be up to 10 minutes) before getting an error. As a result, you
don't really have a good HA solution without VIPs.
When a node fails, the VIP associated with it is automatically failed over to some other
node and new node re-arps the world indicating a new MAC address for the IP.
Subsequent packets sent to the VIP go to the new node, which will send error RST
packets back to the clients. This results in the clients getting errors immediately.
What do you do if you see GC CR BLOCK LOST in top 5 Timed Events in AWR
Report?
This is most likely due to a fault in the interconnect network.
Check netstat -s.
If you see "fragments dropped" or "packet reassemblies failed", work with your system
administrator to find the fault in the network.
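A sketch of that check on Linux (the exact counter wording varies by distribution):

```
# Look for IP reassembly failures / dropped fragments on each cluster node
netstat -s | grep -iE 'reassembl|fragments dropped'
```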
How many nodes are supported in a RAC database?
10g Release 2 supports 100 nodes in a cluster using Oracle Clusterware, and 100
instances in a RAC database.
Srvctl cannot start instance, I get the following error PRKP-1001 CRS-0215,
however sqlplus can start it on both nodes? How do you identify the problem?
Set the environment variable SRVM_TRACE to true and start the instance with srvctl.
You will now get a detailed error stack.
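A sketch of the procedure (the database and instance names are placeholders for your own):

```
# Enable srvctl/SRVM tracing for this shell, then reproduce the failure
export SRVM_TRACE=true
srvctl start instance -d RACDB -i RACDB1
# The detailed trace and error stack are printed to the terminal
```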
What is the purpose of the ONS daemon?
The Oracle Notification Service (ONS) daemon is a daemon started by the CRS
clusterware as part of the nodeapps. One ONS daemon is started per clustered
node.
The ONS daemon receives a subset of published clusterware events via the local evmd
and racgimon clusterware daemons and forwards those events to application subscribers
and to the local listeners.
This facilitates:
a. FAN (Fast Application Notification), allowing applications to respond to
database state changes.
b. the 10gR2 Load Balancing Advisory, the feature that permits load balancing across
different RAC nodes depending on the load on each node. The RDBMS MMON process
creates an advisory for the distribution of work every 30 seconds and forwards it via
racgimon and ONS to listeners and applications.
Labels: Oracle RAC Interview questions
19 comments:
1. oracleandffun, December 6, 2011 at 3:19 AM
How many ethernet cards and how many IPs are there in RAC? What is the master daemon in RAC?
Reply from sudhir (May 6, 2012 at 2:08 PM):
2 NIC cards and 3 IPs for each node in RAC for 10g; for 11g, 3 extra IPs for SCAN.
2. Anonymous, December 29, 2011 at 11:09 PM
When does a node become the "MASTER NODE"?
Reply from sudhir (May 6, 2012 at 2:03 PM):
The node with the lowest node number becomes the master node, and dynamic remastering of the resources will take place.
To find the master node for a particular resource, query v$ges_resource for the MASTER_NODE column.
To find which node is the master overall, check the ocssd.log file and search for "master node number".
Reply from sudhir (May 6, 2012 at 2:05 PM):
When the master node fails, the surviving node with the lowest node number becomes the new master node.
3. Anonymous, December 29, 2011 at 11:11 PM
What is dynamic remastering? When does dynamic remastering happen?
Reply from sudhir (May 6, 2012 at 1:58 PM):
Dynamic remastering is the ability to move ownership of a resource from one instance to another in RAC. It is used to implement resource affinity for increased performance: resource affinity optimizes the system in situations where update transactions are executed on one instance; when activity shifts to another instance, the resource affinity correspondingly moves to that instance. If activity is not localized, resource ownership is hashed across the instances.
In 10g, dynamic remastering happens at the file+object level. The criteria for remastering are stringent: one instance should touch an object more than 50 times as often as the other instances within a particular period (say 10 minutes). This touch ratio and time window can be tuned with the _gc_affinity_limit and _gc_affinity_time parameters.
4. Rahman, January 3, 2012 at 6:43 AM
Good questions, and an in-depth post.
5. ramesh, January 17, 2012 at 7:44 PM
Why do we maintain an odd number of voting disks?
Reply from Anonymous (May 6, 2012 at 1:40 PM):
In Oracle RAC, a node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of them at any time. If a node cannot access the minimum required number of voting disks, it is evicted (removed) from the cluster.
Four disks are no more highly available than three: half of 3 is 1.5, rounded up to 2, and half of 4 is also 2. Once we lose 2 disks, the cluster fails whether we have 4 voting disks or 3.
Reply from Krishan Jaglan (June 1, 2012 at 1:00 AM):
An odd number of disks avoids split brain. When nodes in a cluster can't talk to each other, they race to lock the voting disks, and whoever locks more disks survives. If the number of disks is even, each node might lock 50% of the disks (2 out of 4); then how do you decide which node to evict? When the number is odd, one node will hold more than the other, and the cluster evicts the node holding fewer. Thanks, krishan
Reply from Amzad Khan (August 20, 2012 at 10:31 PM):
thank you sir
6. Tanveer, January 23, 2012 at 8:38 AM
good questions
7. Rajkumar, February 4, 2012 at 6:05 PM
A node must be able to access more than half of the voting disks at any time. For example, if you have three voting disks configured, then a node must be able to access at least two of the voting disks at any time. If a node cannot access the minimum required number of voting disks, it is evicted, or removed, from the cluster.
8. Database DBA, February 21, 2012 at 12:04 PM
Excellent sharing. Kindly provide more questions and answers, especially for RMAN with RAC.
9. Anonymous, June 3, 2012 at 9:51 AM
Very good blog and all questions and answers.
Q.1 Can you explain a bit about checkpoints and the local & remote listeners?
Q.2 Regarding DBMS Scheduler jobs in a RAC DB, I have observed that all scheduled jobs run from one instance only; if manually run from another instance, they run from that instance. Does the instance_stickiness parameter have anything to do with this?
Reply from Sunil (June 26, 2012 at 7:33 AM):
In a RAC database we usually see both local and remote listeners. The remote listener will be the SCAN listener, which acts as a load balancer.
10. Anonymous, June 10, 2012 at 5:11 AM
It's really the only blog that covers RAC FAQs this well on the entire web.
The article "When exactly during the installation process are clusterware components created?" is really very good, but it's for 10g. If you updated the blog for 11gR2, that would be really great.
11. Anonymous, July 2, 2012 at 1:48 PM
Any idea on lock monitoring in RAC? Can someone explain?
RAC Load Balancing, TAF , FAN
Client Side Connect-Time Load Balance
The client load balancing feature enables clients to randomize connection requests
among the listeners.
This is done by the client tnsnames parameter LOAD_BALANCE.
Setting (load_balance=yes) instructs SQL*Net to progress through the list of listener
addresses in the address_list section of the net service name in a random sequence.
When set to OFF, SQL*Net tries the addresses sequentially until one succeeds.
Client Side Connect-Time failover
This is done by client Tnsnames Parameter: FAILOVER
The (failover=on) enables clients to connect to another listener if the initial connection
to the first listener fails. Without connect-time failover, Oracle Net attempts a
connection with only one listener.
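The two client-side parameters above combine in a tnsnames.ora entry like this sketch (host and service names are placeholders):

```
RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = yes)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = racdb))
  )
```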
Server Side Listener Connection Load Balancing.
With server-side load balancing, the listener directs a connection request to the best
instance currently providing the service.
Init parameter remote_listener should be set. When set, each instance registers with the
TNS listeners running on all nodes within the cluster.
There are two types of server-side load balancing:
Load Based — the listener redirects connections depending on node load. This is the
default.
Session Based — session-based load balancing takes into account the number of
sessions connected to each node and distributes new connections to balance the
number of sessions across the different nodes.
From 10g Release 2, a service can be set up to use the load balancing advisory. This
means connections can be routed using SERVICE TIME or THROUGHPUT goals. Connection
load balancing means the goal of a service can be changed to reflect the type of
connections using the service.
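Setting those goals on a service can be sketched with the DBMS_SERVICE package (10gR2+; the service name is a placeholder):

```sql
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'oltp_svc',
    goal         => DBMS_SERVICE.GOAL_SERVICE_TIME,  -- route by response time
    clb_goal     => DBMS_SERVICE.CLB_GOAL_SHORT);    -- for short-lived connections
END;
/
```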
Transparent Application Failover (TAF)
Transparent Application Failover (TAF) is a feature of the Oracle Call Interface (OCI)
driver on the client side. It enables the application to automatically reconnect to the
database if the instance to which the connection was made fails. In this case, active
transactions roll back.
Tnsnames Parameter: FAILOVER_MODE
e.g (failover_mode=(type=select)(method=basic))
The failover mode type can be either SESSION or SELECT.
With SESSION failover, only the session fails over to the next available node. With
SELECT, the in-flight select query is also resumed.
TAF can also be configured purely with server-side service settings using the
DBMS_SERVICE package.
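The server-side TAF configuration just mentioned can be sketched with DBMS_SERVICE (10gR2+; the service name and retry/delay values are illustrative):

```sql
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name     => 'racdb_taf',
    failover_method  => DBMS_SERVICE.FAILOVER_METHOD_BASIC,
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 180,   -- how many reconnect attempts
    failover_delay   => 5);    -- seconds between attempts
END;
/
```

With this in place, clients using the service inherit TAF behaviour without any FAILOVER_MODE clause in their tnsnames.ora.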
Fast Connection Failover (FCF)
Fast Connection Failover is a feature of Oracle clients that have integrated with FAN HA
Events.
Oracle JDBC Implicit Connection Cache, Oracle Call Interface (OCI), and Oracle Data
Provider for .Net (ODP.Net) include fast connection failover.
With fast connection failover, when a down event is received, cached connections
affected by the down event are immediately marked invalid and cleaned up.
Oracle Clusterware: Components installed
When exactly during the installation process are clusterware components
created?
After fulfilling the pre-installation requirements, the basic installation steps to follow
are:
1. Invoke the Oracle Universal Installer (OUI)
2. Enter the different information for some components like:
- name of the cluster
- public and private node names
- location for OCR and Voting Disks
- network interfaces used for RAC instances
-etc.
3. After the Summary screen, OUI starts copying the libraries and executables under the
$CRS_HOME (the $ORACLE_HOME for Oracle Clusterware) on the local node.
- here the daemons and the init.* scripts are created and configured properly.
Oracle Clusterware is formed of several daemons, each of which has a special
function inside the stack. The daemons are executed via the init.* scripts (init.cssd,
init.crsd and init.evmd).
- note that for CRS only some client libraries are recreated, but not all the executables
(as for the RDBMS).
4. Later the software is propagated to the rest of the nodes in the cluster and the
oraInventory is updated.
5. The installer will ask to execute root.sh on each node. Until this step the software for
Oracle Clusterware is inside the $CRS_HOME.
Running root.sh will create several components outside the $CRS_HOME:
- The OCR and voting disk will be formatted.
- Control files (or SCLS_SRC files) will be created with the correct contents to start
Oracle Clusterware. These files are used to control some aspects of Oracle Clusterware,
such as:
- enabling/disabling processes from the CSSD family (e.g. oprocd, oslsvmon)
- stopping the daemons (ocssd.bin, crsd.bin, etc.)
- preventing Oracle Clusterware from being started when the machine boots
- etc.
- /etc/inittab will be updated and the init process is notified.
In order to start the Oracle Clusterware daemons, the init.* scripts first need to be run.
These scripts are executed by the init daemon; to accomplish this, entries are
created in the file /etc/inittab.
- the various init.* processes (init.cssd, init.crsd, etc.) then start the daemons (ocssd.bin,
crsd.bin, etc.). When all the daemons are running, the installation was successful.
- On 10.2 and later, running root.sh on the last node in the cluster also will create the
nodeapps (VIP, GSD and ONS). On 10.1, VIPCA is executed as part of the RAC
installation.
6. After running root.sh on each node, we need to continue with the OUI session. After
pressing the 'OK' button, OUI records the information for the public and
cluster_interconnect interfaces, and the Cluster Verification Utility (CVU) is executed.
How to move OCR and Voting disk to new storage device?
Moving OCR
==========
You must be logged in as the root user, because root owns the OCR files. Also an
ocrmirror must be in place before trying to replace the OCR device.
Make sure there is a recent backup of the OCR file before making any changes:
# ocrconfig -showbackup
If there is not a recent backup copy of the OCR file, an export can be taken for the
current OCR file. Use the following command to generate an export of the online OCR
file:
In 10.2:
# ocrconfig -export <backup_file_name> -s online
In 11g:
# ocrconfig -manualbackup
The new OCR disk must be owned by root, must be in the oinstall group, and must have
permissions set to 640. Provide at least 100 MB disk space for the OCR.
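As a sketch, pre-creating a new OCR file on a clustered file system with the required ownership and permissions might look like this. The path is hypothetical, the demo uses a small file (a real OCR needs at least 100 MB), and on raw devices you would set ownership and mode on the device file instead:

```shell
# Hypothetical location on a clustered file system
OCR_NEW=/tmp/ocr_demo.dbf

# Pre-create the file (use count=100 or more for a real OCR)
dd if=/dev/zero of=$OCR_NEW bs=1M count=10

# Ownership must be root:oinstall; chown requires root, so it is
# shown commented out in this demo:
# chown root:oinstall $OCR_NEW
chmod 640 $OCR_NEW

# Verify mode and size
stat -c '%a %s' $OCR_NEW
# prints: 640 10485760
```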
On one node as root run:
# ocrconfig -replace ocr <new_ocr_location>
# ocrconfig -replace ocrmirror <new_ocrmirror_location>
Now run ocrcheck to verify that the OCR points to the new location.
Moving Voting Disk
==================
Note: crsctl votedisk commands must be run as root
Shutdown the Oracle Clusterware (crsctl stop crs as root) on all nodes before making
any modification to the voting disk. Determine the current voting disk location using:
crsctl query css votedisk
Take a backup of each voting disk:
dd if=voting_disk_name of=backup_file_name
To move a voting disk, provide the full path including the file name:
crsctl delete css votedisk <old_voting_disk_path> -force
crsctl add css votedisk <new_voting_disk_path> -force
After modifying the voting disk, start the Oracle Clusterware stack on all nodes
# crsctl start crs
Verify the voting disk location using
crsctl query css votedisk
1 comment:
Anonymous, October 26, 2012 10:56 PM
This procedure does not work for 11g.
ocrconfig -replace ocr
ocrconfig -replace ocrmirror
are valid only for 10g. In 11g,
ocrconfig -replace oldfile -replacement newfile
is the right syntax. Furthermore, raw devices are not supported in 11.2 for OCR and
voting disks unless you migrated from 10g; 11g supports OCR and voting disks on ASM
only.
How To Restore RAC Database to Single Instance On Another Node
Take an RMAN backup of the production RAC database:
RMAN> run{
allocate channel c1 type disk format '/tmp/%U';
backup database;
backup archivelog all;
}
- Create a PFILE for the single-instance database using the production RAC parameter
file, and modify parameters such as the %dest parameters, control_files,
log_archive_dest_1, the %convert parameters, cluster_database_instances and
cluster_database. For undo_tablespace, mention any one undo tablespace name.
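A sketch of the edited single-instance PFILE might look like this; the paths and the database name PROD are hypothetical:

```text
*.control_files='/u01/oradata/PROD/control01.ctl'
*.log_archive_dest_1='LOCATION=/u01/arch/PROD'
*.db_file_name_convert='+DATA/prod','/u01/oradata/PROD'
*.log_file_name_convert='+DATA/prod','/u01/oradata/PROD'
*.cluster_database=FALSE
*.undo_tablespace='UNDOTBS1'
# Remove cluster_database_instances and the instance-specific
# entries (PROD1.*, PROD2.*) from the RAC parameter file.
```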
- Copy the backup pieces and the modified INIT.ORA file to the same mount point on
the new host.
- Use the pfile created above to STARTUP NOMOUNT the database on the new host
$ sqlplus "/ as sysdba"
SQL> startup nomount;
$ rman target / nocatalog
RMAN> restore controlfile from '/tmp/<backup piece name of controlfile autobackup>';
RMAN> alter database mount;
- Determine the recovery point.
RMAN> list backup of archivelog all;
Check the last archive sequence for all redo threads and select the archive sequence
having the LEAST "Next SCN" among them.
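The selection of the recovery point can be sketched with a small awk script over the thread / sequence / Low SCN / Next SCN columns. The sample numbers below are made up for illustration; in practice you would take them from the RMAN listing:

```shell
#!/bin/sh
# Made-up sample data: thread, sequence, low SCN, next SCN of the
# backed-up archivelogs.
cat > /tmp/arch_list.txt <<'EOF'
1 101 5000 5200
1 102 5200 5400
2 55 5100 5350
EOF

# For each thread keep the highest sequence, then pick the smallest
# "Next SCN" among the threads: media recovery must stop there.
result=$(awk '
  $2 > seq[$1] { seq[$1] = $2; nscn[$1] = $4 }
  END {
    for (t in seq)
      if (min == "" || nscn[t] + 0 < min + 0) { min = nscn[t]; thr = t }
    printf "recover until sequence %s thread %s (Next SCN %s)", seq[thr], thr, min
  }' /tmp/arch_list.txt)
echo "$result"
# prints: recover until sequence 55 thread 2 (Next SCN 5350)
```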
- Having determined the point up to which media recovery should run, start the
restore/recovery using:
RMAN> run {
set until sequence < sequence# from above> thread < thread# >;
restore database;
recover database;
}
SQL> alter database open resetlogs;
If the open fails with error ORA-38856, set the following parameter in the init.ora file:
_no_recovery_through_resetlogs=TRUE
Then open with RESETLOGS.
Once the database has opened, remove this hidden parameter.
- Once the database is opened successfully, you may remove the redolog groups for
redo threads of other instances.
SQL> select THREAD#, STATUS, ENABLED
2 from v$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 CLOSED PRIVATE
SQL> select group# from v$log where THREAD#=2;
GROUP#
----------
4
5
6
SQL> alter database disable thread 2;
Database altered.
SQL> alter database clear unarchived logfile group 4; (repeat for groups 4 to 6)
Database altered.
SQL> alter database drop logfile group 4; (repeat for groups 4 to 6)
Database altered.
- Now you can remove the undo tablespaces of other instances.
SQL> show parameter undo;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> select tablespace_name from dba_tablespaces where contents='UNDO';
TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2
SQL> drop tablespace UNDOTBS2 including contents and datafiles;
Tablespace dropped.
How to export and import CRS resources while migrating Oracle RAC to a new server.
The script below generates srvctl add commands for the databases, instances, services
and 11g listeners registered in the OCR of the current RAC.
Save the output of the script and run it on the new RAC.
for DBNAME in $(srvctl config database)
do
  # Generate DB resource
  srvctl config database -d $DBNAME -a | awk -v dbname="$DBNAME" \
    'BEGIN { FS=":" }
     $1~/Oracle home/ || $1~/ORACLE_HOME/ {dbhome = "-o" $2}
     $1~/Spfile/ || $1~/SPFILE/ {spfile = "-p" $2}
     $1~/Disk Groups/ {dg = "-a" $2}
     END { printf "%s %s %s %s %s\n", "srvctl add database -d", dbname, dbhome, spfile, dg }'
  # Generate Instance resource
  srvctl status database -d $DBNAME | awk -v dbname="$DBNAME" \
    '$4~/running/ { printf "%s %s %s %s %s %s\n", "srvctl add instance -d", dbname, "-i", $2, "-n", $7 }
     $5~/running/ { printf "%s %s %s %s %s %s\n", "srvctl add instance -d", dbname, "-i", $2, "-n", $8 }'
  # Modify instance for 10G - ASM dependency
  if [ $(echo $ORACLE_HOME | grep "1020" | wc -l) -eq 1 ]
  then
    srvctl status database -d $DBNAME | awk -v dbname="$DBNAME" \
      '$2~/1$/ { printf "%s %s %s %s %s\n", "srvctl modify instance -d", dbname, "-i", $2, "-s +ASM1" }
       $2~/2$/ { printf "%s %s %s %s %s\n", "srvctl modify instance -d", dbname, "-i", $2, "-s +ASM2" }
       $2~/3$/ { printf "%s %s %s %s %s\n", "srvctl modify instance -d", dbname, "-i", $2, "-s +ASM3" }
       $2~/4$/ { printf "%s %s %s %s %s\n", "srvctl modify instance -d", dbname, "-i", $2, "-s +ASM4" }'
  fi
  echo "srvctl start database -d $DBNAME"
  # Generate Service resource
  snamelist=$(srvctl status service -d $DBNAME | awk '{print $2}')
  for sname in $snamelist
  do
    srvctl config service -d $DBNAME -s $sname | awk -v dbname="$DBNAME" -v sname=$sname \
      'BEGIN { FS=":" }
       $1~/Preferred instances/ {pref = "-r" $2}
       $1~/PREF/ {pref = "-r" $2; sub(/AVAIL/, "", pref) }
       $1~/Available instances/ {avail = "-a" $2}
       $2~/AVAIL/ {avail = "-a" $3}
       $1~/Failover type/ {ft = "-e" $2}
       $1~/Failover method/ {fm = "-m" $2}
       $1~/Runtime Load Balancing Goal/ {g = "-B" $2}
       END { if (avail == "-a ") {avail = ""}; printf "%s %s %s %s %s %s %s %s %s %s\n", "srvctl add service -d", dbname, "-s", sname, pref, avail, ft, fm, g, "-P BASIC" }'
    echo "srvctl start service -d $DBNAME -s $sname"
  done
done
# Listener at 11G home. 10G listeners can't be added with srvctl.
srvctl config listener | awk \
  'BEGIN { FS=":"; state = 0 }
   $1~/Name/ {lname = "-l" $2; state = 1}
   $1~/Home/ && state == 1 {ohome = "-o" $2; state = 2}
   $1~/End points/ && state == 2 {lport = "-p " $3; state = 3}
   state == 3 { if (ohome != "-o ") { printf "%s %s %s %s\n", "srvctl add listener", lname, ohome, lport }; state = 0 }'
2 comments:
Anonymous, December 8, 2011 12:58 AM
This script is not working. Getting errors as follows:
awk: syntax error near line 1
awk: bailing out near line 1
awk: syntax error near line 1
awk: bailing out near line 1
srvctl start database -d lrntest
awk: syntax error near line 1
awk: bailing out near line 1
Anonymous, December 8, 2011 2:53 AM
I did the following two things on Solaris and it started working:
#!/usr/bin/ksh
AWK=/usr/xpg4/bin/awk
and I replaced all awk invocations with $AWK in the script.
Backup & Recovery, RMAN
Steps to recover the primary database's datafile using a copy of a standby database's datafile
This procedure will work for all file systems including raw or ASM.
Throughout this example we will use datafile 12.
1) On standby database, copy datafile from ASM to a file system:
RMAN> backup as copy datafile 12 format '/tmp/df12.dbf';
2) transfer the datafile copy from the standby to the primary host using scp.
On primary database
3) Place the datafile to recover offline.
SQL> alter database datafile 12 offline;
4) catalog this datafile copy:
RMAN> catalog datafilecopy '/tmp/df12.dbf';
5) Confirm that datafile exists:
RMAN> list copy of datafile 12;
6) Restore the datafile:
RMAN> restore datafile 12;
7) Recover the datafile:
RMAN> recover datafile 12;
8) Place the datafile online:
SQL> alter database datafile 12 online;
Steps to recover the standby database's datafile using a copy of a primary
database's datafile.
1) Backup the primary database's datafile.
RMAN> backup as copy datafile 12 format '/tmp/df12.dbf';
2) transfer the file to the standby site using an operating system utility such as scp.
3) catalog the datafile copy on the standby site.
RMAN> catalog datafilecopy '/tmp/df12.dbf';
4) stop redo apply on the physical standby database.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
5) on the standby site restore the datafile copy.
RMAN> restore datafile 12;
6) restart redo apply on the physical standby database.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT
How to recover Standby database from a missing archivelog
Recovering a Standby database from a missing archivelog
A Physical Standby database relies on continuous application of archivelogs from a
Primary Database to be in synch with it. In Oracle Database versions prior to 10g in the
event of an archivelog gone missing or corrupt you had to rebuild the standby database
from scratch.
In 10g you can use an incremental backup from SCN and recover the standby using
the same to compensate for the missing archivelogs as shown below
Step 1: On the standby database check the current scn.
STDBY> set numwidth 30;
STDBY> select current_scn from v$database;
CURRENT_SCN
-----------
123456789
Step 2: On the primary database create the needed incremental backup from the above
SCN
$ rman target /
RMAN> run {
allocate channel c1 type disk;
BACKUP INCREMENTAL FROM SCN 123456789 DATABASE
FORMAT '/tmp/incr_bkp/incr_bkp_%U';
}
Step 3: Cancel managed recovery at the standby database
STDBY> alter database recover managed standby database cancel;
Database altered.
scp the backup files to the /tmp/incr_bkp directory on the standby server.
Step 4: Catalog the Incremental Backup Files at the Standby Database
/tmp/incr_bkp > rman target /
RMAN> CATALOG START WITH '/tmp/incr_bkp/';
searching for all files that match the pattern /tmp/incr_bkp/
List of Files Unknown to the Database
=====================================
......
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done
Step 5: Apply the Incremental Backup to the Standby Database
RMAN> RECOVER DATABASE NOREDO;
Step 6: Put the standby database back to managed recovery mode.
STDBY> recover managed standby database disconnect;
Media recovery complete.
From the alert.log you will notice that the standby database is still looking for the old
log files
*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence ....
**************************************************
This is because the controlfile has not been updated.
Hence the standby controlfile has to be recreated
Step 7: On the primary create new standby controlfile
PRIM> alter database create standby controlfile as '/tmp/incr_bkp/standby01.ctl';
Database altered.
Step 8: At the standby, replace the standby controlfile at all locations shown by the
control_files parameter.
Copy the standby control file to the standby site. Shut down the standby database,
replace the standby controlfiles, and restart the standby database in managed recovery
mode.
Note: for standby databases on ASM, additional steps such as renaming datafiles are
required after replacing the standby control file.
RMAN DUPLICATE WITHOUT CONNECTING TO TARGET DATABASE IN 11GR2
Prior versions of Oracle required a connection to the TARGET database and an optional
RMAN catalog for duplicating a database.
In 11GR2, we can perform RMAN duplicate database to a new server without connecting
to the target database or a recovery catalog.
Steps.
1) Take backup of the database.
RMAN> Configure controlfile autobackup on;
RMAN> backup database plus archivelog;
2) Copy the backupsets to the auxiliary server (in this example, to the /orabackup
directory).
3) Create an init.ora for the duplicate database.
4) Start the auxiliary instance
$ export ORACLE_SID=NEWDB
$ sqlplus "/ as sysdba"
SQL> startup nomount;
5) Start RMAN and connect to the auxiliary instance
$ rman auxiliary /
RMAN> DUPLICATE DATABASE TO NEWDB BACKUP LOCATION '/orabackup/' NOFILENAMECHECK;
1 comment:
Anonymous, July 10, 2012 11:13 PM
I have used the same steps but ended up with RMAN-05579: CONTROLFILE backup not found, even though I have shipped the backup of the controlfile to the specified location.
Oracle 11g new features
Oracle Database 11g: New Features in DataGuard
Active Data Guard
In Oracle Database 10g and below you could open the physical standby database for
read-only activities, but only after stopping the recovery process.
In Oracle 11g, you can query the physical standby database in real time while the
archived logs are being applied. This means the standby continues to be in sync with
the primary, but can also be used for reporting.
Let us see the steps:
First, cancel the managed standby recovery:
SQL> alter database recover managed standby database cancel;
Database altered.
Then, open the database as read only:
SQL> alter database open read only;
Database altered.
While the standby database is open in read-only mode, you can resume the managed
recovery process.
SQL> alter database recover managed standby database disconnect;
Database altered.
Snapshot Standby
In Oracle Database 11g, physical standby database can be temporarily converted into
an updateable one called Snapshot Standby Database.
In that mode, you can make changes to the database. Once the test is complete, you
can roll back the changes made for testing and convert the database back into a
standby undergoing normal recovery. This is accomplished by creating a restore point
in the database and using the Flashback Database feature to flash back to that point,
undoing all the changes.
Steps:
Configure the flash recovery area, if it is not already done.
SQL> alter system set db_recovery_file_dest_size = 2G;
System altered.
SQL> alter system set db_recovery_file_dest= '+FRADG';
System altered.
Stop the recovery.
SQL> alter database recover managed standby database cancel;
Database altered.
Convert this standby database to snapshot standby using command
SQL> alter database convert to snapshot standby;
Database altered.
Now recycle the database
SQL> shutdown immediate
...
SQL> startup
ORACLE instance started.
Database is now open for read/write operations
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
---------- ----------------
READ WRITE SNAPSHOT STANDBY
After your testing is completed, you would want to convert the snapshot standby
database back to a regular physical standby database by following the steps below
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate
SQL> startup mount
...
Database mounted.
SQL> alter database convert to physical standby;
Database altered.
Now shutdown, mount the database and start managed recovery.
SQL> shutdown
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
...
Database mounted.
Start the managed recovery process
SQL> alter database recover managed standby database disconnect;
Now the standby database is back in managed recovery mode. When the database was
in snapshot standby mode, the archived logs from primary were not applied to it. They
will be applied now.
In 10g this can be done by following the steps in "Dataguard failover test with flashback" (see the Data Guard section).
Redo Compression
In Oracle Database 11g you can compress the redo that is shipped to the standby
server over SQL*Net by setting the compression attribute on the archive destination.
This works only for the logs shipped during gap resolution. Here is the command you
can use to enable compression:
alter system set log_archive_dest_2 = 'service=STDBYDB LGWR ASYNC
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBYDB
compression=enable';
Oracle Database 11g: New Features in RMAN
Advice on recovery
To list failures:
RMAN> list failure;
To get advice on recovery:
RMAN> advise failure;
Recovery Advisor generates a script that can be used to repair the datafile or resolve
the issue. The script does all the work.
To verify what the script actually does ...
RMAN> repair failure preview;
Now execute the actual repair by issuing...
RMAN> repair failure;
Proactive Health Checks
In Oracle Database 11g, a new command in RMAN, VALIDATE DATABASE, can check
database blocks for physical corruption.
RMAN> validate database;
Parallel backup of the same datafile.
In 10g each datafile is backed up by only one channel. In Oracle Database 11g RMAN,
multiple channels can back up one datafile in parallel by breaking the datafile into
chunks known as "sections".
Optimized backup of undo tablespace.
In 10g, when the RMAN backup runs, it backs up all the data from the undo tablespace.
But during recovery, the undo data related to committed transactions is no longer
needed.
In Oracle Database 11g, RMAN bypasses backing up the committed undo data that is
not required in recovery. The uncommitted undo data that is important for recovery is
backed up as usual. This reduces the size and time of the backup.
Improved Block Media Recovery Performance
If flashback logs are present, RMAN will use these in preference to backups during block
media recovery (BMR), which can significantly improve BMR speed.
Block Change Tracking Support for Standby Databases
Block change tracking is now supported on physical standby databases, which in turn
means fast incremental backups are now possible on standby databases.
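Enabling it on the standby uses the same statement as on a primary; the file path below is hypothetical:

```sql
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/oradata/STDBY/bct.chg';
```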
Faster Backup Compression
RMAN now supports the ZLIB binary compression algorithm as part of the Oracle
Advanced Compression option. The ZLIB algorithm is optimized for CPU efficiency, but
produces larger backup files than the previously available BZIP2 algorithm, which is
optimized for compression ratio.
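The algorithm is selected with a persistent RMAN configuration, for example:

```sql
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
```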
Archived Log Deletion Policy Enhancements
The archived log deletion policy of Oracle 11g has been extended to give greater
flexibility and protection in a Data Guard environment. The Oracle 10g and Oracle 11g
syntax is displayed below.
# Oracle 10g Syntax.
CONFIGURE ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON STANDBY |
NONE}}
# Oracle 11g Syntax.
CONFIGURE ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON [ALL] STANDBY |
BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier | NONE | SHIPPED TO [ALL]
STANDBY} [ {APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE
deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY}]...}
The extended syntax allows for configurations where logs are eligible for deletion only
after being applied to, or transferred to, one or more standby database destinations.
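For example, to make archived logs deletable only after they have been applied on every standby, or only after two tape backups:

```sql
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE SBT;
```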
Oracle Database 11g: Invisible Indexes
Oracle 11g allows indexes to be marked as invisible.
Invisible indexes are maintained like any other index, but they are ignored by the
optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at
the instance or session level. Indexes can be created as invisible by using the INVISIBLE
keyword, and their visibility can be toggled using the ALTER INDEX command.
CREATE INDEX index_name ON table_name(column_name) INVISIBLE;
ALTER INDEX index_name INVISIBLE;
ALTER INDEX index_name VISIBLE;
Invisible indexes can be useful for processes with specific indexing needs, where the
presence of the indexes may adversely affect other functional areas. They are also
useful for testing the impact of dropping an index.
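A typical test flow looks like this; the table, column and index names are hypothetical:

```sql
CREATE INDEX emp_dept_ix ON emp(deptno) INVISIBLE;
-- The optimizer ignores the index by default; enable it for this session only:
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;
-- ... run the workload and compare execution plans ...
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = FALSE;
```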
The current visibility status of an index is indicated by the VISIBILITY column of the
[DBA|ALL|USER]_INDEXES views.
Oracle Database 11g: New Features for Automatic Memory Management
Oracle has simplified memory management over the last few versions of the database.
Oracle 9i automated PGA management by introducing the PGA_AGGREGATE_TARGET
parameter. Oracle 10g continued this trend by automating SGA management using the
SGA_TARGET parameter. Oracle 11g takes this one step further by allowing you to
allocate one chunk of memory, which Oracle uses to dynamically manage both the SGA
and PGA.
Automatic memory management is configured using two new initialization parameters:
•MEMORY_TARGET: The amount of shared memory available for Oracle to use when
dynamically controlling the SGA and PGA. This parameter is dynamic, so the total
amount of memory available to Oracle can be increased or decreased, provided it does
not exceed the MEMORY_MAX_TARGET limit. The default value is "0".
•MEMORY_MAX_TARGET: This defines the maximum size the MEMORY_TARGET can
be increased to without an instance restart. If the MEMORY_MAX_TARGET is not
specified, it defaults to MEMORY_TARGET setting.
When using automatic memory management, the SGA_TARGET and
PGA_AGGREGATE_TARGET act as minimum size settings for their respective memory
areas. To allow Oracle to take full control of the memory management, these
parameters should be set to zero.
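A sketch of switching an instance to automatic memory management; the sizes are hypothetical, and since MEMORY_MAX_TARGET is static, a restart is needed:

```sql
ALTER SYSTEM SET MEMORY_MAX_TARGET = 4G SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET = 3G SCOPE=SPFILE;
-- Set these to 0 so Oracle has full control of both areas:
ALTER SYSTEM SET SGA_TARGET = 0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```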
Oracle Database 11g: New Features in datapump
COMPRESSION parameter in expdp
One of the big issues with Data Pump was that the dumpfile couldn't be compressed
while getting created. In Oracle Database 11g, Data Pump can compress the dumpfiles
while creating them by using the COMPRESSION parameter on the expdp command
line. The parameter has four options:
METADATA_ONLY - only the metadata is compressed (this is the default).
DATA_ONLY - only the data is compressed; the metadata is left alone.
ALL - both the metadata and data are compressed.
NONE - no compression is performed.
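For example (directory and dumpfile names hypothetical; compressing table data requires the Advanced Compression option):

```shell
expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=ALL
```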
Encryption
The dumpfile can be encrypted while getting created. The encryption uses the same
technology as TDE (Transparent Data Encryption) and uses the wallet to store the
master key. This encryption occurs on the entire dumpfile, not just on the encrypted
columns as was the case in Oracle Database 10g.
Data Masking
When you import data from production to QA, you may want to make sure sensitive
data is altered in such a way that it is not identifiable. Data Pump in Oracle Database
11g enables you to do that by creating a masking function and then using it during
import.
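The masking function is supplied via the REMAP_DATA parameter. REMAP_DATA itself is the real 11g parameter; the package hr.mask_pkg and function mask_salary below are hypothetical:

```shell
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees \
  REMAP_DATA=hr.employees.salary:hr.mask_pkg.mask_salary
```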
REMAP_TABLE
Allows you to rename tables during an import operation.
Example
The following is an example of using the REMAP_TABLE parameter to rename the
employees table to a new name of emps:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees
REMAP_TABLE=hr.employees:emps
Oracle Database 11g: Read only tables
Read-Only Tables in Oracle Database 11g
Oracle 11g allows tables to be marked as read-only using the ALTER TABLE command.
ALTER TABLE table_name READ ONLY;
ALTER TABLE table_name READ WRITE;
Any DML statements that affect the table data and SELECT ... FOR UPDATE queries
result in an ORA-12081 error message.
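For example (table and schema names hypothetical; the exact error text may vary slightly by release):

```sql
SQL> ALTER TABLE emp READ ONLY;
Table altered.
SQL> UPDATE emp SET sal = sal * 2;
ORA-12081: update operation not allowed on table "HR"."EMP"
```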
Data Guard
Data Guard failover with flashback
Steps for dataguard failover test with database flashback.
On Primary
Defer the log shipping to standby.
SQL> Alter system set log_archive_dest_state_n=defer;
Check the last log sequence:
SQL> select max(sequence#) from v$log_history;
On Standby
Enable flashback recovery.
Setup flashback destination and file size.
SQL> Alter system set db_recovery_file_dest='/u01/flash' scope=spfile;
SQL> Alter system set db_recovery_file_dest_size=10G scope=spfile;
Cancel the recovery.
SQL> Alter database recover managed standby database cancel;
SQL> Shutdown immediate
Startup database in mount mode.
SQL> Startup mount
Put database in the flashback mode
SQL> alter database flashback on;
Create the Restore Point on the standby database
SQL> Create restore point before_dr guarantee flashback database;
Activate Standby database
SQL> Alter database activate standby database;
Open the DR database
SQL> Alter database open;
Shutdown and restart the database
--- Perform DR tests ---
Steps to rollback and convert to standby.
On Standby
Shutdown after DR test
SQL>Shutdown immediate
Startup database with mount and force option.
SQL>startup mount force
Flash the database back to the restore point.
SQL> Flashback database to restore point before_dr;
Drop the restore point
SQL> Drop restore point before_dr;
Turn off flashback
SQL> alter database flashback off;
Convert database to physical standby
SQL> Alter database convert to physical standby;
shutdown and startup in standby mode
SQL> Shutdown immediate
SQL> startup nomount
SQL> Alter database mount standby database;
SQL> Alter database recover managed standby database disconnect from session;
On Production
Enable the log shipping to standby.
SQL> Alter system set log_archive_dest_state_4='enable';
Switch the log and verify the apply.
SQL> alter system switch logfile;
Server Utilities ..exp/imp , expdp , impdp, SQL*Loader ...
How to change the table name during import?
In 11g you can do this by using REMAP_TABLE
Following is the information from Oracle 11g documentation.
REMAP_TABLE
Default: There is no default
Purpose:
Allows you to rename tables during an import operation performed with the
transportable method.
Syntax and Description
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
You can use the REMAP_TABLE parameter to rename entire tables.
You can also use it to override the automatic naming of table partitions that were
exported using the transportable method. When partitioned tables are exported using
the transportable method, each partition and subpartition is promoted to its own table
and by default the table is named by combining the table and partition name (that is,
tablename_partitionname). You can use REMAP_TABLE to specify a name other than the
default.
Restrictions
•Only objects created by the Import will be remapped. In particular, preexisting tables
will not be remapped if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.
Example
The following is an example of using the REMAP_TABLE parameter to rename the
employees table to a new name of emps:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=hr.employees REMAP_TABLE=hr.employees:emps