7/24/2019 03b Controlling Resources Part 2
Controlling resources in an Exadata environment
Agenda
Exadata Security
Flash Cache and Log
Storage Indexes
Parallel Execution
Agenda
Exadata Security
Flash Cache and Log
Storage Indexes
Parallel Execution
EXADATA SECURITY
Exadata Security Model
Exadata Storage Interconnect uses IP and InfiniBand
Limit the storage connection to specific adapters
Defined when creating the cell
This is physical security, robust and easy to maintain
If systems (DB nodes) are connected to InfiniBand, they can
See Exadata cells
See / use (empty) grid disks, unless Exadata Security is set up
Per-database access-control is possible
Open Security, ASM-Scoped, Database-Scoped Security
Security setup on Exadata
Database nodes are the customer's responsibility
No firewall setup
Password limitations in place
90 days, case restrictions, sizes, 5 attempts etc
Exadata storage cells are secured
Password restrictions
90 days, case restrictions, sizes, 5 attempts etc
Firewall setup
Only SSH access, through the Mgmt and IB networks
OS activity monitored
Automatic log-off of SSH sessions
Open Security (default)
No access restrictions
Useful for environments without security requirements
Default security mode after creating a new storage cell
No security setup required, works out of the box
No ACLs or access-limitations
Any database can access all grid-disks
Normal ASM protection is in place through headers
Mistakes (force addition) could happen
ASM-Scoped Security
Security per ASM cluster
ASM cluster identified by a unique key in a local file on each DB node
Alias is defined in the Exadata cell for each unique key
Griddisks are enabled for specific alias(es)
Only one key needs to be set up per ASM cluster
Same key on all nodes of the same ASM cluster
Every Griddisk needs to be enabled
Aliases in the properties of the Griddisk (field availableTo)
Database-Scoped Security
Each database needs different key
Each database, not each instance !
Make sure all DB in a cluster have same key setup
If no key has been setup, fallback to ASM security
If wrong key has been setup, access is denied
Access per GridDisk (availableTo) field
Each cell gets alias for unique key
Access by setting alias(es) in the availableTo field
Multiple entries are allowed in the availableTo field
The ASM cluster used must have access too!
Both Database and ASM must have access to GridDisk
Creating a Security Key
A new key can be generated on any cell:
CellCLI> CREATE KEY
66e12adb996805358bf82258587f5050
Use the result on all cells and DB nodes
Do not generate a key on each cell; one key is enough
Do not generate a key for each instance, just for each database
User is responsible for copying the key to the right locations
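Because the same key must land on every cell and DB node, distribution is usually scripted. A hedged sketch using the Exadata dcli utility (the group file name dbs_group is an assumption, and paths mirror the cellkey.ora location shown later in this deck):

```
# Generate the key once, on any one cell:
CellCLI> CREATE KEY
         66e12adb996805358bf82258587f5050

# Push one cellkey.ora from the current DB node to all DB nodes,
# dbs_group being an assumed file listing the database node hostnames:
dcli -g dbs_group -l root -f /etc/oracle/cell/network-config/cellkey.ora \
     -d /etc/oracle/cell/network-config/
```

Automating the copy avoids the classic mistake of mismatched keys between cluster nodes.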
Setting up ASM-Scoped Security
1. Start with Open Security, tested and working
2. Shutdown all databases and ASM
3. Create a new key on any cell
4. Copy key file cellkey.ora to all hosts in the cluster
5. Copy the key to all cells in the storage pool
6. Set up ACLs to allow ASM on all cells
7. Restart all databases and ASM
All databases using this ASM can access Griddisks.
No other user can access any of the Exadata storage.
ASM-Scoped Security: How to set it up
On any cell:
CellCLI> CREATE KEY
66e12adb996805358bf82258587f5050
Lookup the name of the ASM cluster
SQL> SHOW PARAMETER db_unique_name
Create the cellkey.ora file on the ASM node
Located in /etc/oracle/cell/network-config/cellkey.ora
Permissions 640
#cellkey.ora file for ASM
key= (66e12adb996805358bf82258587f5050)
asm= (+asm)
realm= (my_realm)
ASM-Scoped Security: How to set it up (continued)
On all of the cells (not just one)
ASSIGN KEY FOR +asm='66e12adb996805358bf82258587f5050'
Create new Grid disks with the ACL set by default
CREATE GRIDDISK ALL PREFIX=sales, availableTo='+asm'
Or change the ACL of existing grid disks
ALTER GRIDDISK ALL availableTo='+asm'
ALTER GRIDDISK sales_CD_1_cell01, sales_CD_2_cell01,
sales_CD_3_cell01, sales_CD_4_cell01,
sales_CD_5_cell01, sales_CD_6_cell01 availableTo='+asm'
Setting Up Database-Scoped Security
1. Start with ASM-Scoped Security, tested and working.
2. Shutdown all databases and ASM
3. Create a key for each database (not each instance)
4. Copy key file cellkey.ora to each database home
5. Copy the key to all cells in the storage pool
6. Set up ACLs to allow each database, as needed.
7. Restart all databases and ASM
Each database can access its own grid disks.
If a key is defined, no other user can access any of the Exadata storage.
Database-Scoped Security: How to set it up
Create a new key on any cell:
CellCLI> CREATE KEY
51a826646ebe1f29e33c6ed7c4965c9a
Create the cellkey.ora file on the database node
Located in $ORACLE_HOME/admin/<db_unique_name>/pfile/cellkey.ora
Permissions 640
#cellkey.ora file for Database
key= (51a826646ebe1f29e33c6ed7c4965c9a)
asm= (+asm)
realm= (my_realm)
Database-Scoped Security: How to set it up (continued)
On all of the cells (not just one!)
ASSIGN KEY FOR PROD='51a826646ebe1f29e33c6ed7c4965c9a'
Use the DB_UNIQUE_NAME as the key alias!
Create new Grid disks with the ACL set by default
CREATE GRIDDISK ALL PREFIX=sales, availableTo='+asm,PROD'
Or change the ACL of existing grid disks
ALTER GRIDDISK ALL availableTo='+asm,PROD'
ALTER GRIDDISK sales_CD_1_cell01, sales_CD_2_cell01,
sales_CD_3_cell01, sales_CD_4_cell01,
sales_CD_5_cell01, sales_CD_6_cell01 availableTo='+asm,PROD'
Make sure you repeat existing values in the availableTo field!
ALTER GRIDDISK will overwrite every value previously set
Adding/Removing Databases
Keep cellkey.ora in each database directory
It doesn't hurt, and it keeps the environment more secure
Retrieve the list of defined keys from the cells
CellCLI> LIST KEY
+asm 66e12adb996805358bf82258587f5050
PROD b67d5587fe728118af47c57ab8da650a
Issue ALTER GRIDDISK to change the availableTo field
Remember that ALTER GRIDDISK overwrites the contents of the availableTo field!
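A safe pattern, sketched here with the deck's grid disk names and a hypothetical TEST database, is to read the current value first and repeat it in the new one:

```
CellCLI> LIST GRIDDISK sales_CD_1_cell01 ATTRIBUTES name, availableTo
         sales_CD_1_cell01     +asm,PROD

# Adding database TEST: repeat the existing entries, or they are lost
CellCLI> ALTER GRIDDISK sales_CD_1_cell01 availableTo='+asm,PROD,TEST'
```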
Database-Scoped to ASM-Scoped
1. Shutdown all databases and ASM
2. Remove all database keys from the cells
CellCLI> ASSIGN KEY FOR PROD=''
3. Delete cellkey.ora from each database directory
4. Remove all databases from the Grid-Disk availableTo field
CellCLI> ALTER GRIDDISK ALL availableTo='+asm'
5. Restart ASM and databases.
ASM-Scoped to Open Security
1. Shutdown all databases and ASM
2. Remove the ASM key from the cells
CellCLI> ASSIGN KEY FOR +asm=''
3. Delete the cluster cellkey.ora from the hosts
rm /etc/oracle/cell/network-config/cellkey.ora
4. Remove ASM from the Grid-Disk availableTo field
CellCLI> ALTER GRIDDISK ALL availableTo=''
5. Restart ASM and databases.
Exadata Security Best Practices
When configuring security, the flow is always
Open security to
ASM-scoped security to
Database-scoped security.
Make sure disk security setup is consistent
All disks in an ASM group and all disks for a database
When using RAC, cellkey.ora and permissions are vital
Make sure they are the same on any node in the cluster.
Don't mix ASM- and DB-scoped security in one system
Use DCLI to automate security setup
Exadata Security Summary
Prevent access to Grid-Disks using keys
Key at DB node level for ASM-scoped security
Key in the DB home directory for DB-scoped security
All nodes in the ASM cluster, or all DB instances, use the same key
Create an alias for each key at the storage cell level
Setup is per Griddisk at the storage cell level
The availableTo field needs both the ASM and the DB level entries
Shut down all instances before making changes
One wrong command can cause issues in the database
QUESTIONS & ANSWERS
On Exadata Security
Agenda
Exadata Security
Flash Cache and Log
Storage Indexes
Parallel Execution
FLASH CACHE
5TB of Flash Memory
Smart Flash Cache
Million I/Os per Second
Sub-Millisecond I/O time
50GB/sec Flash Scan
3x to 50x Compression on Flash
World's First Flash Database Machine
Semiconductor Cache Hierarchy
Massive throughput and IOs through an innovative cache hierarchy
Database DRAM cache
400GB raw capacity
Up to 4TB compressed user data
100 GB/sec
Exadata Smart Flash Cache
5TB raw capacity
Up to 50TB compressed user data
50 GB/sec raw scan
1 million IO/sec
Exadata disks
100TB or 336TB raw
Up to 500TB compressed user data
21 GB/sec scan
50K IO/sec
Sun Flash Accelerator F20 PCIe Card
96GB Storage Capacity
4 x 24GB Flash modules (DOMs)
6GB reserved for failures
x8 PCIe card
Super Capacitor backup
Optimized for database caching
Fully integrated into the Exadata V2 database machine
Measured end-to-end performance
3.6GB/sec/cell or 50GB/sec per DBM
75,000 IOPs/cell or 1 million IOPS per DBM
How can we use Flash Storage
Exadata Smart Flash Cache
Store frequently used blocks (data and index) on Flash storage
Exadata Smart Flash Logging
Use Flash as a buffer for your redo log writes
Exadata Flash-Based Storage
Uses Flash as permanent high-speed storage
Or ALL of the above
As we have 4 cards with 96GB of Flash memory each
We can split the cards into separate sections with different functionality
Flash storage as Flash Cache
What is Cached?
Frequently accessed data and index blocks
Control file reads and writes
File header reads and writes
DBA can also influence caching priorities
What is Not Cached?
I/O to mirror copies
Backup-related I/O
Data Pump I/O
Data file formatting
Table scans don't monopolize the cache
Flash Storage as Flash Logging
Redo logs are sequential writes
Disk DRAM cache is usually as fast as (or faster than) Flash writes
Provided the disk cache gets flushed fast enough
Can be a bottleneck on high-write workloads
Redo log writes go to the Flash Cache and the disk DRAM cache at the same time
The first one to finish signals write complete to the DB
Starting with the 11.2.2.4.0 cell patchset / 11.2.0.2 BP11 DB version
Can be turned on or off via the IO Resource Manager
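Assuming the IORM dbplan supports a per-database flashLog attribute (analogous to the flashCache attribute used for the Flash Cache elsewhere in this deck), toggling Smart Flash Logging per database might look like this sketch:

```
CellCLI> ALTER IORMPLAN
         dbplan=((name=prod, flashLog=on),
                 (name=test, flashLog=off),
                 (name=other, flashLog=off))
```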
Flash logging example
Flash Storage as Datafile storage
Using Flash as DB storage reduces usable Flash space
2x when using Normal Redundancy
3x when using High Redundancy
Only use it for heavy scattered reads and writes
Sequential writes are usually faster using
Lots of disk and disk cache
Dedicated disks so disk-arm does not have to move
Can be used in a partitioning or ILM scheme
Most (almost all) customers do not use this feature
Flash Best Practices
Use Flash only for production
Unless you have enough (speed and space) for everything
Use Flash logging for redo-log bottlenecks
Use Flash Cache and Logging instead of file storage
Use all available Flash space
Most performance benefit comes from reads and redo logs
Only change after AWR reports issues with small writes
Use as much for Cache and Logging as possible
If you need file storage, use only the part of Flash you need
Keep the other parts for Cache and Logging
Exadata Smart Flash Cache
Commands
Creating the Flash Cache by default
CREATE CELL flashcache
Creating the Flash Cache explicitly
CREATE FLASHCACHE ALL size=100g
Users can prioritize caching of tables
ALTER TABLE table_name STORAGE (CELL_FLASH_CACHE KEEP)
Without the setting, individual tables and indexes are cached
Display the Flash Cache configuration
LIST FLASHCACHE DETAIL
Control access to flash cache
Prevent databases from accessing the flash cache
Low-priority databases
Test databases
Etc.
New attribute introduced for IORM: FlashCache
Cellcli> ALTER IORMPLAN
dbplan=((name=prod, flashCache=on),
(name=dev, flashCache=on),
(name=test, flashCache=off),
(name=other, flashCache=off))
Exadata Flash Summary
5+ Terabytes of Flash available in an Exadata Machine
About 400GB per Exadata Storage Cell
No controller bottlenecks; cards instead of disks
3.6GB/sec/cell or 50GB/sec per DBM
75,000 IOPs/cell or 1 million IOPS per DBM
Use it as…
Exadata Flash Cache
Exadata Flash Logging
Exadata Flash file storage
Needs Normal or High Redundancy
Only use it if the AWR report shows issues with scattered writes
QUESTIONS & ANSWERS
On Exadata Flash storage
Agenda
Exadata Security
Flash Cache and Log
Storage Indexes
Parallel Execution
STORAGE INDEXES
Biggest performance Bottleneck
Disks are a big bottleneck for a database
It's all about getting to the data as fast as possible
Flash disks are much faster than normal disks
Normal databases need to read data To determine if the where clause applies to the data
More data is read than needed
Indexes help a lot
But not reading unneeded data at all would be faster!
Introducing Exadata Storage Indexes
Memory-based index of the contents of blocks
Index automatically built using memory of the Exadata cells
Information about a block is only stored in one place
For each column in frequently accessed blocks
Minimum and maximum values are stored
Occurrence of NULL values is stored
Values are automatically stored after the first read of the block
Values live in RAM
Take only a small amount of memory
Destroyed when power is removed (reboot, crash, etc.)
How do Storage Indexes work: An example

select * from table where B < 2

Block 1 (column B values 3, 1, 5): Min B = 1, Max B = 5
Block 2 (column B values 5, 8, 3): Min B = 3, Max B = 8
Block 3 (column B values 6, 7, 9): Min B = 6, Max B = 9

Only Block 1 matches (its Min B = 1 is below 2)
Other blocks do not need to be retrieved
Saves huge amounts of IO per second
Only works with Smart Scans / Direct Reads
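The block-elimination idea can be mimicked in a few lines; this is a toy Python model for illustration only, not how cellsrv actually maintains its indexes:

```python
# Toy model of Exadata storage index pruning (illustrative only).

def build_region_index(blocks):
    """For each block, record (min, max) per column, like a storage index."""
    index = []
    for block in blocks:
        stats = {}
        for row in block:
            for col, val in row.items():
                lo, hi = stats.get(col, (val, val))
                stats[col] = (min(lo, val), max(hi, val))
        index.append(stats)
    return index

def blocks_to_scan(index, col, predicate):
    """Return indexes of blocks whose min/max range could satisfy predicate."""
    return [i for i, stats in enumerate(index)
            if col in stats and predicate(*stats[col])]

# Three blocks holding column B, matching the slide's example.
blocks = [
    [{"B": 3}, {"B": 1}, {"B": 5}],   # block 1: min 1, max 5
    [{"B": 5}, {"B": 8}, {"B": 3}],   # block 2: min 3, max 8
    [{"B": 6}, {"B": 7}, {"B": 9}],   # block 3: min 6, max 9
]
index = build_region_index(blocks)

# WHERE B < 2: a block can match only if its minimum is below 2.
candidates = blocks_to_scan(index, "B", lambda lo, hi: lo < 2)
print(candidates)  # [0] - only block 1 needs to be read
```

Real storage indexes track min/max per storage region rather than per database block, but the elimination logic is the same idea.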
Storage Index with Partitions Example
Queries on Ship_Date do not benefit from Order_Date partitioning
However Ship_date and Order# are highly correlated with Order_Date
e.g. Ship dates are usually near Order_Dates and are never less
Storage index provides partition-pruning-like performance for queries on Ship_Date and Order#
Takes advantage of the ordering created by partitioning or sorted loading
Orders Table:

Order# | Order_Date (partitioning column) | Ship_Date | Item
1      | 2007                             | 2007      | …
2      | 2008                             | 2008      | …
3      | 2009                             | 2009      | …
Measuring performance
Use V$SYSSTAT for information
Look for the statistic cell physical IO bytes saved by storage index
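For example, assuming you have SELECT access to the V$SYSSTAT view:

```
SELECT name, value
  FROM v$sysstat
 WHERE name = 'cell physical IO bytes saved by storage index';
```

A growing value shows how many bytes of physical IO the storage indexes have eliminated.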
Storage Indexes - Summary
In-memory storage of block characteristics
Highest and lowest value of each column in a block
Built when the block is accessed for the first time
Stays in memory
Destroyed when cellsrv is down (power down, reboot, etc.)
No communication between servers
Low memory footprint
Cannot be turned on or off
No extra license fee
QUESTIONS & ANSWERS
On Storage Indexes
Agenda
Exadata Security
Flash Cache and Log
Storage Indexes
Parallel Execution
PARALLEL EXECUTION
Automated Degree of Parallelism
An 11gR2 generic feature

In the past, tuning parallelism was a manual process
One degree of parallelism does not fit all queries
Too much parallelism could flood the entire system
Automated Degree of Parallelism automatically decides
If a statement will execute in parallel or not (whether serial execution would take longer than a set threshold, 30 secs)
What degree of parallelism the statement will use
The optimizer derives the DoP from the statement based on resource requirements
Uses the cost of all scan operations
Balanced against a maximum limit of parallelism
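That decision can be sketched in a few lines; the threshold mirrors the slide's 30-second figure and PARALLEL_MIN_TIME_THRESHOLD parameter, while the function name and numbers are purely illustrative:

```python
# Illustrative sketch of the Auto DOP decision flow described above.

def choose_dop(estimated_serial_secs, ideal_dop, default_dop,
               threshold_secs=30):
    """Return 1 (serial) if the serial estimate is under the threshold,
    otherwise min(default DOP, the optimizer's ideal DOP)."""
    if estimated_serial_secs < threshold_secs:
        return 1                      # fast enough: run serially
    return min(default_dop, ideal_dop)  # cap the ideal DOP at the default

print(choose_dop(5, ideal_dop=16, default_dop=8))    # 1 (runs serially)
print(choose_dop(120, ideal_dop=16, default_dop=8))  # 8 (capped by default)
print(choose_dop(120, ideal_dop=4, default_dop=8))   # 4 (ideal is lower)
```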
Automated Degree of Parallelism: How it works

The SQL statement is hard parsed and the optimizer determines the execution plan
If the estimated time is less than the threshold (PARALLEL_MIN_TIME_THRESHOLD): the statement executes serially
If the estimated time is greater than the threshold: the optimizer determines the ideal DOP
Actual DOP = MIN(default DOP, ideal DOP), and the statement executes in parallel
Parallel Statement Queuing
Automatic Degree of Parallelism means
More statements will run in parallel
Potential system thrashing due to too many processes
Parallel Statement Queuing automatically decides if a statement can execute immediately or not
When a parallel statement starts, it checks if enough parallel servers (aka Parallel Query Slaves) are available
If not, the statement is queued
Once enough parallel servers become available, the statement is de-queued and executed
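A toy FIFO model of that behaviour (the class name and server counts are illustrative, not part of any Oracle API):

```python
# Toy model of parallel statement queuing: a statement runs only if the
# queue is empty and enough parallel servers are free; otherwise it waits
# in arrival (FIFO) order.
from collections import deque

class StatementQueue:
    def __init__(self, total_servers):
        self.free = total_servers
        self.queue = deque()          # FIFO of (name, dop)

    def submit(self, name, dop):
        if not self.queue and dop <= self.free:
            self.free -= dop
            return "running"
        self.queue.append((name, dop))
        return "queued"

    def finish(self, dop):
        """A statement completed: release its servers, then start
        statements from the head of the queue while they fit."""
        self.free += dop
        started = []
        while self.queue and self.queue[0][1] <= self.free:
            name, need = self.queue.popleft()
            self.free -= need
            started.append(name)
        return started

q = StatementQueue(total_servers=64)
print(q.submit("Q1", 32))   # running
print(q.submit("Q2", 64))   # queued: only 32 servers left
print(q.submit("Q3", 8))    # queued: Q2 is ahead in the FIFO
print(q.finish(32))         # ['Q2'] starts; Q3 still waits for free servers
```

Note that Q3 queues even though 8 servers are free, because FIFO ordering forbids it from jumping ahead of Q2.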
Parallel Statement Queuing: How it works

SQL statements are parsed and Oracle automatically determines the DOP
If enough parallel servers are available, the statement executes immediately
If not enough parallel servers are available, the statement is placed on a FIFO queue
When the required number of parallel servers becomes available, the first statement on the queue is dequeued and executed
Parallel Statement Queuing: Monitoring in Enterprise Manager

A clock symbol indicates a queued statement
Click on the SQL ID for more info
Parallel Statement Queuing: Monitoring in Enterprise Manager

A wait event indicates the statement is at the head of the queue
In-Memory Parallel Execution
Data warehouse environments can have large amounts of memory that is not always used
An algorithm places fragments of an object (partitions) in memory on different nodes
Compression gets more data in memory
Parallel servers (aka PQ slaves) are then executed on the corresponding nodes
Requires Automatic Degree of Parallelism
In-Memory Parallel Execution: How it works

For a SQL statement, determine the size of the table being looked at
Table is extremely small: read into the buffer cache on any node
Table is extremely large: always use direct reads from disk
Table is a good candidate for In-Memory Parallel Execution: fragments of the table are read into each node's buffer cache
Only parallel servers on the same RAC node will access each fragment
In-Memory Parallel Execution

A single database machine has over 400GB of memory usable for caching
Database release 11.2 introduces parallel query processing on memory-cached data
Harnesses the memory capacity of the entire database cluster for queries
Foundation for the world record 1TB TPC-H result
Exadata Hybrid Columnar Compression enables multi-terabyte tables or partitions to be cached in memory
QphH: 1 TB TPC-H

ParAccel: 315,842
Exasol: 1,018,321
Oracle & HP Exadata: 1,166,976

Faster than specialized in-memory warehouse databases
Memory has 100x more bandwidth than disk

Source: Transaction Processing Council, as of 9/14/2009:
Oracle on HP Bladesystem c-Class 128P RAC, 1,166,976 QphH@1000GB, $5.42/QphH@1000GB, available 12/1/09.
Exasol on PRIMERGY RX300 S4, 1,018,321 QphH@1000GB, $1.18/QphH@1000GB, available 08/01/08.
ParAccel on SunFire X4100, 315,842 QphH@1000GB, $4.57/QphH@1000GB, available 10/29/07.
http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=107102903
Parallelism - Summary

Parallelism is important
Speeds up queries, uses all available resources
No need for manual parallelism tuning
Needs an init.ora parameter (PARALLEL_DEGREE_POLICY=AUTO)
The optimizer decides on parallelism based on statistics
No need for parallelism limitation
Statements will wait for free slaves
Prevents overload on the system
Queries are still faster than single-threaded execution
Parallelism can trigger in-memory behaviour
Qualified tables can be stored in memory automatically
QUESTIONS & ANSWERS
On Parallel Execution
QUESTIONS & ANSWERS
Now you do it !