
Luci, Ricci and the RAC

...or Clustering on CentOS 5 using Conga for an Oracle RAC install

Drawn up by Fauz Ghauri (@fauzg) for barcamp


Introduction

• Hardware

• Installation

• Custom udev rules for iSCSI

• Cluster

• Heartbeat and fencing

• Things to consider with RAC

• Acknowledgements


Hardware

• Network Cards

– DRAC (Dell Remote Access Card)

– iLO (Integrated Lights-Out – HP kit)

– VMware fencing (built into ESX – maybe VMware Server too?)

– Etc.

• SAN – Are you using iSCSI?

• VMware – which version of ESX or VMware Server do you run?


Installation

• Mainly defaults used for the CentOS 5 install

• A few extra packages are required outside of the base install.
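
• A rough sketch of those extras, assuming the stock CentOS 5 / Cluster Suite repos (exact package names may differ on your build):

  – # yum install ricci cman rgmanager – on every cluster node

  – # yum install luci – on the management node only

  – # yum install iscsi-initiator-utils – if you're using iSCSI storage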


Installation – Conga Components
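
• Conga = luci (the web front end) + ricci (the agent on each node). A minimal sketch of getting the agent going, assuming the packages above are installed:

  – # service ricci start

  – # chkconfig ricci on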


Installation – Luci
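
• A sketch of initialising luci on the management node – the luci_admin step and the 8084 port are the CentOS 5 defaults, so check them on your build:

  – # luci_admin init – sets the admin password

  – # service luci restart

  – # chkconfig luci on

  – Then browse to https://<management-host>:8084 and add your nodes.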


Custom udev rules for iSCSI

• If you’re using iSCSI – you’ll need to make sure that your drives are mapped.

# iscsiadm -m discovery -t sendtargets -p <sanIP.mydomain (or maybe IP address)>
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs


Custom udev rules for iSCSI

• Manually log onto targets

– # iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.195 -l

• Configure auto login

– # iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.195 --op update -n node.startup -v automatic
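
• A quick sanity check that the logins worked (assuming the stock iscsi-initiator-utils tools):

  – # iscsiadm -m session – lists the targets you're logged in to

  – # ls -l /dev/disk/by-path/ – the new SCSI devices should appear here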


Custom udev rules for iSCSI

• # (cd /dev/disk/by-path; ls -l *sanname* | awk '{FS=" "; print $9 " " $10 " " $11}')

• Returns:

  – ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0 -> ../../sda

  – Etc.

  – Make a note!

  – scsi_id -g -u -s /block/sdX – find out which letters map to which iSCSI device (where X is a, b, c, etc)

  – Note iSCSI strings.

  – Create a new rules file in /etc/udev/rules.d/22-randomnumberandname.rules


Custom udev rules for iSCSI

• ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="<iscsi string goes here>", RUN+="/bin/raw /dev/raw/raw1 %N"

• ACTION=="add", KERNEL=="raw1", OWNER=="root", GROUP=="dba", MODE=="0660", SYMLINK+="oracle_ocr"
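
• Once the rules file is saved, something like the following should apply it and let you confirm the binding – a sketch using the CentOS 5-era udev tools:

  – # start_udev (or udevcontrol reload_rules, then re-login the iSCSI targets)

  – # raw -qa – should show /dev/raw/raw1 bound to the right device

  – # ls -l /dev/oracle_ocr – the symlink created by the second rule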


Cluster


This turns yellow as there's an issue (a callout on the slide's screenshot).
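
• If something turns yellow in luci (or anything else looks off), you can double-check the cluster from any node with the standard Cluster Suite tools – a rough sketch:

  – # clustat – member status and quorum

  – # cman_tool status – cluster name, node count, quorum details

  – # cat /etc/cluster/cluster.conf – the config Conga has pushed out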


Heartbeat & Fencing

• Heartbeat – saying “hi! I’m here” to the cluster.

• Fencing – fencing off a dodgy node

• Fencing on ESX means creating a dedicated user for fencing.

– useradd and groupadd to create it – the new user has insufficient permissions by default.

– Get it working on the ESX host itself before trying to fence remotely.

– Permissions can be added as follows:


– vmware-vim-cmd vimsvc/auth/role_add groupX VirtualMachine.Interact.PowerOn

– vmware-vim-cmd vimsvc/auth/entity_permission_add vim.Folder:ha-folder-root userX users groupX true

• Test it out with the fence_vmware command (syntax is in the wiki linked in the acknowledgements section).


Things to consider with RAC

• Don’t reboot all nodes at the same time – unless you feel like losing your entire database!

• Bear in mind that the udev rules should map to raw disks if you’re using ASM.

• SSH key equivalency (sharing keys).
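
• A minimal sketch of setting up key equivalence for the oracle user – hostnames and usernames here are just placeholders:

  – $ ssh-keygen -t rsa – run as the oracle user on each node

  – $ ssh-copy-id oracle@racnode2 – repeat for every node in the cluster

  – $ ssh racnode2 date – should come back without a password prompt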


Acknowledgements

• Thanks to:

  – [redacted] for getting me going

  – [redacted] for helping me figure out that nasty little issue with GSSAPI timing out my SSH connections

• Sources:

  – http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html

  – http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/4.8/html/Cluster_Administration/s1-start-luci-ricci-conga-CA.html

  – http://sources.redhat.com/cluster/wiki/VMware_FencingConfig

  – http://www.vroem.co.za/?p=7


Bonus Slides!
