



Oracle RAC Administration and Maintenance Best Practices: A Result of True Collaboration

Ricardo Gonzalez, Senior Product Manager, Real Application Clusters, Development. June 24, 2015

Oracle Confidential – Internal/Restricted/Highly Restricted Copyright © 2015, Oracle and/or its affiliates. All rights reserved. |


Safe Harbor Statement

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.



Operational Best Practices

Upgrade

Installation

http://www.slideshare.net/MarkusMichalewicz/oracle-rac-12c-collaborate-best-practices-ioug-2014-version

Scope: use cases (Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments) × areas (Storage, OS, Network, Cluster, DB)


Agenda

1. New in Oracle RAC 12.1.0.2 (Installation)

Operational Best Practices for:

2. Generic Clusters

3. Extended Cluster

4. Dedicated Environments

5. Consolidated Environments

Appendices A – D



12.1.0.1 vs. 12.1.0.2: Grid Infrastructure Management Repository (GIMR)

New in the 12.1.0.2 installation: the GIMR is created automatically.

• Creates a 12c database

– Single-instance Container Database (CDB) with one Pluggable Database (PDB)

– The resource name is "ora.mgmtdb"

– Consolidation planned for the future

– Installed on one of the (HUB) nodes

– Managed as a failover database

– Stores the operating system metrics collected by the Cluster Health Monitor

– Stored in the first ASM Disk Group created
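A quick way to confirm the automatically created repository is to query its Clusterware resource; a minimal sketch using the `ora.mgmtdb` resource name given above (both tools ship with Grid Infrastructure):

```shell
# Check the GIMR database resource and where it currently runs.
srvctl status mgmtdb
crsctl stat res ora.mgmtdb -t
```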


Recommendation: Change in Disk Group creation. 12.1.0.1 disk group creation: start with the "GRID" disk group.


12.1.0.2 disk group creation: start with the "GIMR" disk group

• The GIMR typically does not require redundancy in its disk group.

– Therefore, do not share it with the GRID disk group.

• The Clusterware files (Voting Files and OCR) are easy to move.

• More information:

– How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)

– Managing the Cluster Health Monitor Repository (Doc ID 1921105.1)

Recommendation: Change in Disk Group creation. Example in Appendix A.


12.1.0.1: Use a Standard Cluster. 12.1.0.2: Use a Flex Cluster (includes Flex ASM by default).

New in 12.1.0.2: Recommendation to use Flex Cluster


12.1.0.2: Use a Flex Cluster (includes Flex ASM by default)

New in 12.1.0.2: Recommendation to use Flex Cluster

Installing an Oracle Cluster with Extended RAC? Use a Standard Cluster + Flex ASM.

More information in Appendix D


Install what is needed; configure what is desired (upgrade later).

New Network Flexibility in 12.1.0.2 – Recommendation

More information in Appendix B


Automatic Diagnostic Repository (ADR)

Directory structure: ADR_base/diag/{asm, rdbms, tnslsnr, clients, crs, (others)}

• Oracle Grid Infrastructure now supports the Automatic Diagnostic Repository

• ADR simplifies the analysis of log files

• Centralizes most logs in a standardized directory structure

• Keeps a history of the logs

• Provides a command-line tool to manage the diagnostic information

More information in Appendix C



Operational Best Practices – Generic Clusters



Generic Clusters → Storage

Scope matrix update: Generic Clusters / Storage → Appendix A

Step 1: Create the "GRID/GIMR" disk group – Generic Cluster

Step 2: Move the Clusterware files

Step 3: Move the ASM files: SPFILE / password file

More information in Appendix A
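Step 1 can be sketched as follows. This is an illustrative fragment, not the Appendix A example itself: the disk paths and failgroup names are hypothetical, and a quorum failgroup is shown because the deck recommends using quorum whenever possible.

```shell
# Hypothetical sketch: create a separate "GRID" disk group for the
# Clusterware files (disk paths are placeholders).
sqlplus / as sysasm <<'SQL'
CREATE DISKGROUP GRID NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/grid1'
  FAILGROUP fg2 DISK '/dev/mapper/grid2'
  QUORUM FAILGROUP fgq DISK '/dev/mapper/gridq'
  ATTRIBUTE 'compatible.asm' = '12.1';
SQL
```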


Generic Clusters → Operating System → Memory

Avoid high memory utilization!

• Use Memory Guard (enabled by default with 12.1.0.2)

• Use Solid State Disks (SSDs) to host the swap space

– Details on My Oracle Support (MOS), Note 1671605.1 – "Use Solid State Disks to host swap space in order to increase node availability"

• Use HugePages for the SGA (Linux) – MOS Notes 361323.1 / 401749.1

• Avoid Transparent HugePages (Linux 6) – MOS Note 1557478.1

(Diagram: two nodes, "brasil" and "germany", each running Oracle GI and Oracle RAC, illustrating swapping.)
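As a back-of-the-envelope companion to the HugePages recommendation above, the number of huge pages needed to back a given SGA is a ceiling division. This helper is illustrative, not from the MOS notes; check the actual Hugepagesize in /proc/meminfo on the target host.

```shell
# Illustrative sizing: vm.nr_hugepages needed to back an SGA of sga_mb MB
# with 2048 kB huge pages (ceiling division).
sga_mb=636          # example value; match your sga_max_size
hugepage_kb=2048    # see: grep Hugepagesize /proc/meminfo
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $pages"
```

For the 636M `sga_max_size` shown later in the deck, this prints `vm.nr_hugepages = 318`.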


Generic Clusters → Operating System → OraChk and TFA

• OraChk

– Replaces RACcheck

– Also known as ExaChk

• RAC Configuration Audit Tool

– MOS Note 1268927.1

• Checks "Oracle" (Database):

– Single-instance databases

– Grid Infrastructure & Oracle RAC

– Use of the Maximum Availability Architecture (MAA) (if configured)

– Oracle hardware configuration

Trace File Analyzer – MOS Note 1513912.1


TFA – Efficiency from A to Z


Generic Clusters → Operating System (Summary)

Scope matrix update: Generic Clusters / Storage → Appendix A; OS → Memory configuration + OraChk / TFA


Generic Clusters → Network

• Set it to "normal"

• Size the interconnect for aggregate throughput

• Use redundancy (HAIPs) for load balancing

• Use different subnets for the interconnect

• Use Jumbo Frames whenever possible

– Make sure the entire infrastructure supports them

More information in Appendix B

(Diagram: an 8K data block Send() from "brasil" to "germany" is fragmented into 1500-byte MTU frames and reassembled on Receive().)
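The fragmentation pictured above is simple arithmetic; this hypothetical helper compares how many fragments an 8 KB block needs at a standard versus a jumbo MTU, assuming a simplified 28 bytes of IP/UDP header overhead per fragment.

```shell
# Fragments needed to carry one 8192-byte block at a given MTU
# (payload = MTU minus an assumed 28-byte IP/UDP header).
block=8192
for mtu in 1500 9000; do
  payload=$(( mtu - 28 ))
  frags=$(( (block + payload - 1) / payload ))
  echo "MTU $mtu -> $frags fragment(s)"
done
```

At MTU 1500 the block is split into 6 fragments; at MTU 9000 it fits in a single jumbo frame, which is why Jumbo Frames are recommended when the whole infrastructure supports them.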


Generic Clusters → Network

Scope matrix update: Generic Clusters / Network → As discussed + Appendix B


Generic Clusters → Cluster

Scope matrix update: Generic Clusters / Cluster → Appendix D

1: Install / maintain HUB nodes; add Leaf Nodes

2: Adding nodes to the cluster

3: Use Leaf Nodes for non-DB use cases



Operational Best Practices – Extended Cluster



Oracle Extended RAC

From Oracle's point of view, an Extended RAC installation is in use as soon as the data (using Oracle ASM) is mirrored between independent storage arrays. (Exadata Storage cells are excluded from this definition.)

ER (enhancement request): open, to make "Oracle Extended RAC" a distinguishable configuration


Extended Cluster → Storage

Scope matrix update: Extended Cluster / Storage → Appendix A

Step 1: Create the "GRID/GIMR" disk group – Extended Cluster

Step 2: Move the Clusterware files

Step 3: Move the ASM files: SPFILE / password file

Step 4: "srvctl modify asm -count ALL"
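Step 4 raises the ASM cardinality so an ASM instance runs on every node rather than the Flex ASM default of three; a minimal sketch, with a status check added for verification:

```shell
# Run an ASM instance on all (hub) nodes of the extended cluster,
# then confirm where ASM is running.
srvctl modify asm -count ALL
srvctl status asm
```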


Extended Cluster → Operating System

Scope matrix update: Extended Cluster / OS → Same as Generic Clusters

More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF)
http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf


Extended Cluster → Network

• Set "normal"

• The goal in an Extended RAC configuration is to hide the distance. Any increase in latency can (!) affect application performance.

• VLANs are fully supported for Oracle RAC

• Vertical subnet separation is not supported

More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF)
http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf

More information on interconnect VLAN deployment (PDF):
http://www.oracle.com/technetwork/database/database-technologies/clusterware/overview/interconnect-vlan-06072012-1657506.pdf


Extended Cluster → Network

Scope matrix update: Extended Cluster / Network → As discussed + Appendix B


Extended Cluster → Cluster

Scope matrix update: Extended Cluster / Cluster → Same as Generic Clusters

Remember: the goal in an Extended RAC configuration is to hide the distance.



Operational Best Practices – Dedicated Environments

Only a few items to consider.


Dedicated Environments → Network

Scope matrix update: Dedicated (OLTP / DWH) / Network → As discussed + Appendix B


Dedicated Environments → Database

Problem: Applying patches and upgrades → Solution: Rapid Home Provisioning

Problem: Memory consumption → Solution: memory caps

Problem: Number of connections → Solution: several, mostly using connection pools

(Diagram: a connection pool in front of the "brasil" / "germany" cluster nodes running Oracle GI and Oracle RAC.)


Dedicated Environments → Database

New in Oracle Database 12c:

• SGA and PGA aggregate targets can be limited.

• See the documentation for "PGA_AGGREGATE_LIMIT".

[DB]> sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 18 18:57:30 2014
…
SQL> show parameter pga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 2G
pga_aggregate_target                 big integer 211M

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     TRUE
sga_max_size                         big integer 636M
sga_target                           big integer 636M
unified_audit_sga_queue_size         integer     1048576
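Setting the cap mentioned above is a one-liner; a hedged sketch, where the 4G value is purely illustrative:

```shell
# Cap aggregate PGA usage for the instance (example value).
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET pga_aggregate_limit = 4G SCOPE=BOTH SID='*';
SQL
```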

1. Don't handle connection storms: prevent them.

2. Limit the number of connections to the database.

3. Use connection pools whenever possible:
• Oracle Universal Connection Pool (UCP) – http://docs.oracle.com/database/121/JJUCP/rac.htm#JJUCP8197

4. Make sure that applications close their connections.
• If the number of active connections is far below the number of open connections, consider "Database Resident Connection Pooling" – docs.oracle.com/database/121/JJDBC/drcp.htm#JJDBC29023

5. If you cannot prevent a connection storm, slow it down.
• Use LISTENER parameters to mitigate the negative side effects of a connection storm. Most of these parameters can also be used with SCAN.

6. Services can be assigned to one subnet at a time. If you control the subnet, you control the service.
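The Database Resident Connection Pooling mentioned in item 4 is switched on server-side with one call; a hedged sketch following the linked documentation (run as a DBA):

```shell
# Start the default server-side connection pool (DRCP).
sqlplus / as sysdba <<'SQL'
EXECUTE DBMS_CONNECTION_POOL.START_POOL();
SQL
```

Clients then request a pooled server via `(SERVER=POOLED)` in their connect descriptor.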


Dedicated Environments → Database

Scope matrix update: Dedicated (OLTP / DWH) / DB → As discussed



Operational Best Practices – Consolidated Environments

This use case, too, has only a few items to consider.


Consolidated Environments – Without VMs: 2 Main Choices

Database consolidation

• Several database instances running on one server

• Need to manage the memory of several instances

• Use Instance Caging and QoS (in a RAC cluster)

Use Oracle Multitenant

• Fewer instances to manage (CDB)

• Server memory allocation is simplified

• Instance Caging may not be necessary

• QoS remains beneficial for resource management

(Diagram: left, dedicated databases, e.g. instance racdb1_3, on a two-node "brasil" / "germany" cluster; right, a consolidated database "cons" with instances cons1_1 and cons1_2 on hub nodes "brasil", "chile", "germany" and "italy", caged with CPU_Count=5 and CPU_Count=3.)
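The CPU_Count values in the diagram are Instance Caging at work; a hedged sketch of caging one instance (values illustrative, and a Resource Manager plan must be active for the cage to be enforced):

```shell
# Cage this instance to 3 CPUs and activate a Resource Manager plan.
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET cpu_count = 3 SCOPE=BOTH;
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
SQL
```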


Consolidated Environments – Make Them Dedicated …

Use Oracle Multitenant

• It can be operated like a dedicated environment,

• at least from the cluster's perspective,

• if only one CDB instance per server is used.

Additional information:
http://www.oracle.com/technetwork/database/focus-areas/database-cloud/database-cons-best-practices-1561461.pdf
http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-consolidation-1928888.pdf



Consolidated Environments – Database Summary

Area     | Generic Clusters                    | Extended Cluster          | Dedicated (OLTP / DWH)    | Consolidated Environments
Storage  | Appendix A                          | Appendix A                |                           |
OS       | Memory configuration + OraChk / TFA | Same as Generic Clusters  |                           |
Network  | As discussed + Appendix B           | As discussed + Appendix B | As discussed + Appendix B | As Dedicated + as discussed
Cluster  | Appendix D                          | Same as Generic Clusters  |                           |
Database |                                     |                           | As discussed              | As above

Specifically for Oracle Multitenant on Oracle RAC, see:
http://www.slideshare.net/MarkusMichalewicz/oracle-multitenant-meets-oracle-rac-ioug-2014-version


For more information

http://community.oracle.com/blogs/raclatino

[email protected]


Appendix A Creating “GRID” disk group to place the Oracle Clusterware files and the ASM files



Create “GRID” Disk Group – Generic Cluster

Use “quorum” whenever possible.


Create "GRID" Disk Group – Extended Cluster

• More information: http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf

• Use logical names illustrating the disk destination

• Use a quorum for ALL (not only GRID) disk groups used in an Extended Cluster

• Use an NFS destination for the quorum Voting Disk


Move Clusterware Files

Replace the Voting Disk location:

[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                 File Name   Disk group
--  -----    -----------------                 ---------   ---------
 1. ONLINE   8bec21793ee84fd3bfc6831746bf60b4  (/dev/sde)  [GIMR]
Located 1 voting disk(s).

[GRID]> crsctl replace votedisk +GRID
Successful addition of voting disk 7a205a2588d44f1dbffb10fc91ecd334.
Successful addition of voting disk 8c05b220cfcc4f6fbf5752b6763a18ac.
Successful addition of voting disk 223006a9c28e4fd5bf3b58a465fcb66a.
Successful deletion of voting disk 8bec21793ee84fd3bfc6831746bf60b4.
Successfully replaced voting disk group with +GRID.
CRS-4266: Voting file(s) successfully replaced

[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                 File Name   Disk group
--  -----    -----------------                 ---------   ---------
 1. ONLINE   7a205a2588d44f1dbffb10fc91ecd334  (/dev/sdd)  [GRID]
 2. ONLINE   8c05b220cfcc4f6fbf5752b6763a18ac  (/dev/sdb)  [GRID]
 3. ONLINE   223006a9c28e4fd5bf3b58a465fcb66a  (/dev/sdc)  [GRID]
Located 3 voting disk(s).

Add an OCR location (as root):

[GRID]> whoami
root

[GRID]> ocrconfig -add +GRID
[GRID]> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2984
         Available space (kbytes) :     406584
         ID                       :  759001629
         Device/File Name         :      +GIMR
                                    Device/File integrity check succeeded
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
         Device/File not configured
         ...
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Use “ocrconfig -delete +GIMR” if you want to “replace” and maintain a single OCR location.


Move ASM SPFILE – See also MOS note 1638177.1

The default ASM SPFILE location is in the first disk group created (here: GIMR). Perform a rolling ASM instance restart, facilitated by Flex ASM.

[GRID]> export ORACLE_SID=+ASM1
[GRID]> sqlplus / as sysasm
…
SQL> show parameter spfile

NAME     TYPE        VALUE
-------- ----------- ------------------------------
spfile   string      +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

#Change the location:
SQL> create pfile='/tmp/ASM.pfile' from spfile;
File created.

SQL> create spfile='+GRID' from pfile='/tmp/ASM.pfile';
File created.

#NOTE: the running instance still shows the old location until it is restarted:
SQL> show parameter spfile

NAME     TYPE        VALUE
-------- ----------- ------------------------------
spfile   string      +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

Use "gpnptool get" and filter for "ASMPARAMETERFILE" to see the updated ASM SPFILE location in the GPnP profile prior to restarting.

[GRID]> srvctl status asm
ASM is running on brasil,chile,germany
[GRID]> srvctl stop asm -n germany -f
[GRID]> srvctl status asm -n germany
ASM is not running on germany
[GRID]> srvctl start asm -n germany
[GRID]> srvctl status asm -n germany
ASM is running on germany
[GRID]> crsctl stat res ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on brasil

Perform the restart rolling through the cluster. 12c DB instances remain running!


Move ASM Password File

The default ASM shared password file location is the same as for the SPFILE (here: +GIMR). The path is checked while moving the file (an online operation).

[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: +GIMR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[GRID]> srvctl modify asm -pwfile +GRID/orapwASM
[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: +GRID/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[GRID]> srvctl modify asm -pwfile GRID
[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: GRID
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[GRID]> srvctl modify asm -pwfile +GRID
PRKO-3270 : The specified password file +GRID does not conform to an ASM path syntax

Use the correct ASM path syntax!


Appendix B Creating public and private (DHCP-based) networks including SCAN and SCAN Listeners



Add Public Network – DHCP

Step 1: Add the network

[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0

[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype dhcp
[GRID]> exit
exit

Result:

[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
               OFFLINE OFFLINE      brasil                   STABLE
               OFFLINE OFFLINE      chile                    STABLE
               OFFLINE OFFLINE      germany                  STABLE


Add Public Network – DHCP

Step 2: Add a SCAN / SCAN_LISTENER to the new network (as required)

[GRID]> su
Password:
[GRID]> srvctl update gns -advertise MyScan -address 10.2.2.20
# Need to have a SCAN name. A DHCP network requires dynamic VIP resolution via GNS.
[GRID]> srvctl modify gns -verify MyScan
The name "MyScan" is advertised through GNS.

[GRID]> srvctl add scan -k 2
PRKO-2082 : Missing mandatory option -scanname

[GRID]> su
Password:
[GRID]> srvctl add scan -k 2 -scanname MyScan
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2

Result:

[GRID]> srvctl config scan -k 2
SCAN name: MyScan.cupgnsdom.localdomain, Network: 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[GRID]> srvctl config scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:


Add Private Network – DHCP

oifcfg commands:

[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0
eth3 172.149.0.0

[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> oifcfg setif -global "*"/172.149.0.0:cluster_interconnect,asm
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
* 172.149.0.0 global cluster_interconnect,asm
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

Result (ifconfig -a on a HUB node):

BEFORE
eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.7  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:52 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20974 (20.4 KiB)  TX bytes:4230 (4.1 KiB)

AFTER
eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.7  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1161 errors:0 dropped:0 overruns:0 frame:0
          TX packets:864 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:720040 (703.1 KiB)  TX bytes:500289 (488.5 KiB)

eth3:1    Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:169.254.245.67  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

HAIPs will only be used for load balancing once at least the DB / ASM instances, if not the node, are restarted. They are considered for failover immediately.


Side note: Leaf Nodes don't host HAIPs!

ifconfig -a on a HUB node – excerpt:

eth2      Link encap:Ethernet  HWaddr 08:00:27:AD:DC:FD
          inet addr:192.168.7.11  Bcast:192.168.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fead:dcfd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9303 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6112 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8344479 (7.9 MiB)  TX bytes:2400797 (2.2 MiB)

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:AD:DC:FD
          inet addr:169.254.190.250  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.5  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4729 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5195 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1555796 (1.4 MiB)  TX bytes:2128607 (2.0 MiB)

eth3:1    Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:169.254.6.142  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

ifconfig -a on a Leaf node – excerpt:

eth2      Link encap:Ethernet  HWaddr 08:00:27:CC:98:C3
          inet addr:192.168.7.15  Bcast:192.168.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fecc:98c3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7218 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11354 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2644101 (2.5 MiB)  TX bytes:13979129 (13.3 MiB)

eth3      Link encap:Ethernet  HWaddr 08:00:27:06:D5:93
          inet addr:172.149.2.6  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe06:d593/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6074 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5591 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2262521 (2.1 MiB)  TX bytes:1680094 (1.6 MiB)

HAIPs on the interconnect are only used by ASM / DB instances. Leaf Nodes do not host those; hence, they do not host HAIPs. CSSD (the node management daemon) uses a different redundancy approach.


Add Public Network – STATIC

Step 1: Add the network

[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.128.0
eth3 172.149.0.0
eth3 169.254.0.0

# Assuming you have NO global public interface defined on subnet 10.2.2.0
[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 172.149.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype STATIC

Result:

[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
               OFFLINE OFFLINE      brasil                   STABLE
               OFFLINE OFFLINE      chile                    STABLE
               OFFLINE OFFLINE      germany                  STABLE


Add Public Network – STATIC
Step 2: Add VIPs (commands and result)

[GRID]> srvctl add vip -node germany -address germany-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node brasil -address brasil-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node chile -address chile-vip2/255.255.255.0 -netnum 2

[GRID]> srvctl config vip -n germany
VIP exists: network number 1, hosting node germany
VIP Name: germany-vip
VIP IPv4 Address: 10.1.1.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 2, hosting node germany
VIP Name: germany-vip2
VIP IPv4 Address: 10.2.2.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:

[GRID]> srvctl start vip -n germany -k 2
[GRID]> srvctl start vip -n brasil -k 2
[GRID]> srvctl start vip -n chile -k 2

[GRID]> srvctl status vip -n germany
VIP germany-vip is enabled
VIP germany-vip is running on node: germany
VIP germany-vip2 is enabled
VIP germany-vip2 is running on node: germany
[GRID]> srvctl status vip -n brasil
VIP brasil-vip is enabled
VIP brasil-vip is running on node: brasil
VIP brasil-vip2 is enabled
VIP brasil-vip2 is running on node: brasil
[GRID]> srvctl status vip -n chile
VIP chile-vip is enabled
VIP chile-vip is running on node: chile
VIP chile-vip2 is enabled
VIP chile-vip2 is running on node: chile
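The three srvctl add vip commands above differ only in the node name, so they lend themselves to a small loop. The sketch below only prints the commands (a dry run) rather than executing them; the node names and the -vip2 naming convention are taken from the slide example and must be adapted to your environment:

```shell
# Dry-run sketch: generate the per-node "srvctl add vip" commands for a
# second public network instead of typing them one by one.
# Node names and the "-vip2" suffix follow the slide example (assumptions).
NETNUM=2
NETMASK=255.255.255.0

gen_vip_cmds() {
  for node in "$@"; do
    # Print the command; review before running it as root.
    echo "srvctl add vip -node $node -address ${node}-vip2/$NETMASK -netnum $NETNUM"
  done
}

gen_vip_cmds germany brasil chile
```

Pipe the output to a shell (as root) only after reviewing it; keeping generation and execution separate makes the change auditable.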


Add Public Network – STATIC
Step 3: Add SCAN / SCAN_LISTENER to the new network (as required)

# as root
[GRID]> srvctl add scan -scanname cupscan2 -k 2
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2 -endpoints 1522
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is not running
[GRID]> srvctl start scan_listener -k 2

[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is running on node chile
[GRID]> srvctl status scan -k 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node chile
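A check like the one above can also be scripted. The following sketch greps srvctl status scan_listener output (reproduced here as a sample string from the slide) to decide whether the listener is up; in practice you would pipe in the live command output:

```shell
# Sketch: decide from "srvctl status scan_listener" output whether the
# SCAN listener is running. The sample text mirrors the slide output.
scan_is_running() {
  echo "$1" | grep -q "is running on node"
}

sample='SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is running on node chile'

if scan_is_running "$sample"; then
  echo "network 2 SCAN listener is up"
else
  echo "network 2 SCAN listener is down"
fi
```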


Appendix C Automatic Diagnostic Repository (ADR) support for Oracle Grid Infrastructure



• The ADR is a file-based repository for diagnostic data such as traces, dumps, the alert log, health monitor reports, and more.

• ADR helps prevent, detect, diagnose, and resolve problems.

• ADR comes with its own command-line tool (adrci) for easy access to and management of diagnostic information for Oracle GI + DB.


Automatic Diagnostic Repository (ADR) Convenience

ADR_base
└── diag
    ├── asm
    ├── rdbms
    ├── tnslsnr
    ├── clients
    ├── crs
    └── (others)


Some Management Examples

adrci:

[GRID]> adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Sep 18 11:35:31 2014
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/grid"
adrci> show homes
ADR Homes:
diag/rdbms/_mgmtdb/-MGMTDB
diag/tnslsnr/germany/asmnet1lsnr_asm
diag/tnslsnr/germany/listener_scan1
diag/tnslsnr/germany/listener
diag/tnslsnr/germany/mgmtlsnr
diag/asm/+asm/+ASM1
diag/crs/germany/crs
diag/clients/user_grid/host_2998292599_82
diag/clients/user_oracle/host_2998292599_82
diag/clients/user_root/host_2998292599_82

adrci incident management:

[GRID]> adrci
ADR base = "/u01/app/grid"
…
adrci> show incident;
ADR Home = /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB:
*************************************************************************
INCIDENT_ID  PROBLEM_KEY                                          CREATE_TIME
-----------  ---------------------------------------------------  ---------------------------------
12073        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-08 17:44:56.580000 -07:00
36081        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-14 20:11:17.388000 -07:00
40881        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-16 15:30:18.319000 -07:00
…

adrci> set home diag/rdbms/_mgmtdb/-MGMTDB
adrci> ips create package incident 12073;
Created package 1 based on incident id 12073, correlation level typical

adrci> ips generate package 1 in /tmp
Generated package 1 in file /tmp/ORA700ksk_20140918110411_COM_1.zip, mode complete

[GRID]> ls -lart /tmp
-rw-r--r--. 1 grid oinstall 811806 Sep 18 11:05 ORA700ksk_20140918110411_COM_1.zip
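The interactive packaging steps above can also be replayed in batch: adrci accepts a command script (via its script= parameter). The sketch below generates such a script file; the package number 1 is hardcoded only because that is what the slide's ips create output returned, and in real automation you would parse the "Created package N" line instead:

```shell
# Sketch: write an adrci command script that packages a given incident,
# so the interactive slide steps can be replayed in batch.
# usage: make_pkg_script <adr_home> <incident_id> <outfile>
make_pkg_script() {
  cat > "$3" <<EOF
set home $1
ips create package incident $2;
ips generate package 1 in /tmp
EOF
}
# NOTE: "package 1" assumes this is the first package created, as in the
# slide; parse adrci's "Created package N" output for robust automation.

make_pkg_script diag/rdbms/_mgmtdb/-MGMTDB 12073 /tmp/pkg.adrci
cat /tmp/pkg.adrci
```

The generated file could then be run with something like `adrci script=/tmp/pkg.adrci`; the ADR home and incident id used here are the ones from the slide example.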


Space Requirements, Exceptions, and Rules

Binary / Log per Node    Space Requirement
Grid Infra. (GI) Home    ~6.6 GB
RAC DB Home              ~5.5 GB
TFA Repository           10 GB
GI Daemon Traces         ~2.6 GB
ASM Traces               ~9 GB
DB Traces                1.5 GB per DB per month
Listener Traces          60 MB per node per month

Total over 3 months:
• For 2 RAC DBs: ~43 GB
• For 100 RAC DBs: ~483 GB

• Flex ASM vs. Standard ASM, Flex Cluster vs. Standard Cluster
  – Does not make a difference for ADR!
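The totals above follow from simple arithmetic: a fixed per-node footprint plus per-database trace growth over time. A sketch of that calculation, using the estimates from the table (the 100-DB result computes to 484 GB, which the slide rounds to ~483 GB):

```shell
# Rough arithmetic behind the "total over 3 months" figures:
# fixed per-node footprint plus per-database trace growth.
# All numbers are the estimates from the table above.
total_gb() {  # usage: total_gb <num_dbs> <months>
  awk -v dbs="$1" -v m="$2" 'BEGIN {
    fixed = 6.6 + 5.5 + 10 + 2.6 + 9   # GI home, DB home, TFA, GI traces, ASM traces
    db    = 1.5 * dbs * m              # DB traces: 1.5 GB per DB per month
    lsnr  = 0.06 * m                   # listener traces: 60 MB per node per month
    printf "%.0f\n", fixed + db + lsnr
  }'
}

total_gb 2 3    # prints 43
total_gb 100 3  # prints 484 (the slide states ~483)
```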

[Slide graphic: exceptions to the ADR rules above – components such as gnsd, ocssd, ocssdrim, havip, exportfs NFS helper, hanfs, ghc, ghs, mgmtdb agent, APX, gns, and OC4J still write some logs to the GI home rather than to ADR.]


Appendix D Flex Cluster – add nodes as needed



Recommendation: Install HUB Nodes, Add Leaf Nodes
Initial installation: HUB nodes only; add Leaf Nodes later (addNode).

Add “brasil” as a HUB Node – addNode Part 1

Add “brasil” as a HUB Node – addNode Part 2

Add Leaf Nodes – addNode in Short

Note: Leaf nodes do not require a virtual node name (VIP). Application VIPs for non-DB use cases need to be added manually later.

(The message shown in the screenshot is normal and can be ignored.)


Continue to use Leaf Nodes for Applications in 12.1.0.2
Database Installer Suggestion – Consider Use Case

Useful if “spain” is likely to become a HUB at some point in time.


Continue to use Leaf Nodes for Applications in 12.1.0.2
DBCA – Despite running Leaf Nodes

[GRID]> olsnodes -s -t
germany  Active  Unpinned
brasil   Active  Unpinned
chile    Active  Unpinned
italy    Active  Unpinned
spain    Active  Unpinned
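Output such as the olsnodes listing above is easy to summarize programmatically. A small sketch that counts active nodes, fed the sample output from the slide via a here-document (in practice you would pipe in `olsnodes -s -t` directly):

```shell
# Sketch: count nodes reported as "Active" in "olsnodes -s -t" output.
# The second whitespace-separated field carries the node state.
count_active() { awk '$2 == "Active" {n++} END {print n+0}'; }

count_active <<'EOF'
germany Active Unpinned
brasil Active Unpinned
chile Active Unpinned
italy Active Unpinned
spain Active Unpinned
EOF
# prints 5
```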


Some Examples of Resources running on Leaf Nodes
Leaf Listener (OFFLINE/OFFLINE) and Trace File Analyzer (TFA)

[grid@spain Desktop]$ . grid_profile
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       brasil                   STABLE
               ONLINE  ONLINE       chile                    STABLE
               ONLINE  ONLINE       germany                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       brasil                   STABLE
               ONLINE  ONLINE       chile                    STABLE
               ONLINE  ONLINE       germany                  STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      italy                    STABLE
               OFFLINE OFFLINE      spain                    STABLE
ora.net1.network
               ONLINE  ONLINE       brasil                   STABLE
               ONLINE  ONLINE       chile                    STABLE
               ONLINE  ONLINE       germany                  STABLE

[GRID]> ps -ef | grep grid_1
root 1431 1 0 14:12 ? 00:00:19 /u01/app/12.1.0/grid_1/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/RATFA.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/je-5.0.84.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/ojdbc6.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/commons-io-2.2.jar oracle.rat.tfa.TFAMain /u01/app/12.1.0/grid_1/tfa/spain/tfa_home