
HA Cluster SuperDome Configurations

John Foxcroft, BCC/Availability Clusters Solutions Lab
HA Products Support Planning and Training
Version 1.0, 9/22/00

© 2000 Hewlett-Packard Co.


HA Cluster SuperDome Configurations

• HA Cluster Review
  – HA Cluster Architectures
  – Cluster Quorum
  – Cluster Lock
  – Power Requirements
  – Disaster Tolerant Solutions
• Single Cabinet Configuration
• Multi Cabinet Configurations
• Mixed Server Configurations
• Disaster Tolerant Solutions with SuperDome
• References
• FAQs
• Lab Exercises


Range of HA Cluster Architectures

[Figure: the four architectures plotted by distance on one axis and flexibility & functionality on the other]

• Local Cluster – single cluster, automatic failover, same data center
• Campus Cluster – single cluster, automatic failover, same site
• Metro Cluster – single cluster, automatic failover, same city
• Continental Clusters – separate clusters, “push-button” failover, between cities

SuperDome is fully supported across all HA Cluster Architectures!


MC/ServiceGuard

• MC/ServiceGuard features:
  – Multi OS
  – One-stop GUI
  – Rolling upgrade
  – Tape sharing
  – 16 nodes
  – No idle system
  – Online reconfiguration
  – Automatic Failback
  – Rotating standby
• Closely integrated with the OS, HP-UX

[Figure: clients connecting through an application tier to a database tier]
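As a refresher of the administration flow behind these features, a minimal sketch of defining and applying a failover package (file names and paths are illustrative, not from the slides):

    # Generate package configuration and control script templates,
    # edit them, then verify and apply along with the cluster file.
    cmmakepkg -p /etc/cmcluster/pkg1/pkg1.conf
    cmmakepkg -s /etc/cmcluster/pkg1/pkg1.cntl
    cmcheckconf -C /etc/cmcluster/cluster.ascii -P /etc/cmcluster/pkg1/pkg1.conf
    cmapplyconf -C /etc/cmcluster/cluster.ascii -P /etc/cmcluster/pkg1/pkg1.conf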


Examples of Failures and Dynamic Quorum

• 4 nodes (1 2 3 4): 100% quorum is required to boot the cluster unless the manual override of cmruncl (-f) is used (sketched below).
• One node fails: 3 left out of 4 > 50% quorum; the cluster reforms (no lock required).
• A second node fails: 2 left of 3 > 50% quorum; the cluster reforms.
• A third node fails: 1 left of 2 = 50% quorum; a cluster lock is needed to form the cluster.
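A minimal sketch of the start-up behavior described above (node names are illustrative):

    # Normal start: every configured node must join (100% quorum).
    cmruncl -v

    # Manual override: force cluster formation with only the named
    # nodes present. Bypassing the 100% boot requirement risks
    # split-brain if the missing nodes are actually running.
    cmruncl -v -f -n node1 -n node2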


Examples of Failures and Dynamic Quorum (continued)

• 4 nodes, two fail at once: 2 left out of 4 = 50% quorum; a cluster lock is needed to form the cluster.
• 5 nodes, two fail: 3 left out of 5 > 50% quorum; the cluster reforms (no lock required).
• 5 nodes, three fail: 2 left out of 5 < 50% quorum; the cluster goes down!


Cluster Lock Disk

• A Cluster Lock Disk is required in a 2 node cluster (recommended for 3 or 4 nodes) to provide a tie breaker for the cluster after a failure.
• A Cluster Lock Disk is supported for up to 4 nodes maximum.
• Must be a disk that is connected to all nodes.
• It is a normal data disk; the lock functionality is only used after a node failure.

[Figure: two nodes sharing mirrored disk sets A/B and a common cluster lock disk]
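A minimal sketch of how the lock disk appears in the cluster ASCII configuration file (volume group, device files, addresses, and node names are illustrative); each node names its own path to the same physical disk:

    CLUSTER_NAME            cluster1
    FIRST_CLUSTER_LOCK_VG   /dev/vglock

    NODE_NAME               node1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.10.25.18
      FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t2d0   # lock disk, as seen from node1

    NODE_NAME               node2
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.10.25.19
      FIRST_CLUSTER_LOCK_PV /dev/dsk/c2t2d0   # same disk, as seen from node2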


Secure Power Supply

• Care should be taken to make sure a single power supply failure does not take out:
  – half the nodes, and
  – the cluster lock disk.

[Figure: nodes with disk sets A and mirror copies A'; the cluster lock disk sits on an independent power circuit]


HA SuperDome Configurations: Remarks/Assumptions

• Each partition is equivalent to a traditional standalone server running an OS.

• Each partition comes equipped with core I/O, other I/O, and LAN connections.
• Each partition connects to boot devices, data disks, and removable media (DVD-ROM and/or DAT).
• Redundant components exist in each partition to remove SPOFs (single points of failure):
  – redundant I/O interfaces (disk and LAN)
  – redundant heartbeat LANs
  – boot devices protected via mirroring (MirrorDisk/UX or RAID; see the sketch after this list)
  – critical data protected via mirroring (MirrorDisk/UX or RAID)
  – LAN protection:
    – Auto-Port Aggregation for Ethernet LANs
    – MC/SG for Ethernet and FDDI
    – HyperFabric and ATM provide their own LAN failover abilities
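For the boot-device mirroring assumed above, a minimal MirrorDisk/UX sketch (device names are illustrative; lvol1–lvol3 are the usual boot, swap, and root volumes on HP-UX 11.0):

    # Prepare the second disk as a bootable LVM disk and add it to vg00
    pvcreate -B /dev/rdsk/c2t6d0
    vgextend /dev/vg00 /dev/dsk/c2t6d0
    mkboot /dev/rdsk/c2t6d0

    # Mirror the boot, swap, and root logical volumes onto the new disk
    lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c2t6d0
    lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c2t6d0
    lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c2t6d0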


HA SuperDome Configurations: Remarks/Assumptions (continued)

• Any partition that is protected by MC/SG can be configured in a cluster with:
  – a standalone system
  – another partition within the same SuperDome cabinet (see HA considerations for more details)
  – another SuperDome
• Any partition that is protected by MC/SG contains as many redundant components as possible to further reduce the chance of failure. For example:
  – Dual AC power to a cabinet is recommended, if possible.
  – A redundant I/O chassis attached to a different cell is recommended, if possible.


HA SuperDome Configurations: Cabinet Considerations

• Three single points of failure (SPOFs) have been identified within single cabinet 16-Way and 32-Way systems and dual cabinet 64-Way systems:
  – system clock, power monitor, system backplane
• To configure an HA cluster with no SPOF, the membership must extend beyond a single cabinet:
  – The cluster must be configured such that the failure of a single cabinet does not result in the failure of a majority of the nodes in the cluster.
  – The cluster lock device must be powered independently of the cabinets containing the cluster nodes.
• Some customers want a “cluster in a box” configuration.
  – MC/ServiceGuard will support this configuration; however, it needs to be recognized that it contains SPOFs that will bring down the entire cluster.
  – Mixed OS and ServiceGuard revisions should only exist temporarily while performing a rolling upgrade within a cluster.
• 64-Way dual cabinet systems connected with flex cables have worse SPOF characteristics than single cabinet 16-Way and 32-Way systems.
  – There is no HA advantage to configuring a cluster within a 64-Way system vs. across two 16 or 32-Way systems.
• Optional AC input power on a separate circuit is recommended.


HA SuperDome Configurations: I/O Considerations

• Cluster heartbeat will be done over LAN connections between SuperDome partitions.

• Redundant heartbeat paths are required and can be provided either by multiple heartbeat subnets or by standby interface cards (see the sketch after this list).

• Redundant heartbeat paths should be configured in separate I/O modules (I/O card cages) when possible.

• Redundant paths to storage devices used by the cluster are required and can be provided either by disk mirroring or by LVM PV-links.

• Redundant storage device paths should be configured in separate I/O modules (I/O card cages) when possible.
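A minimal sketch of both heartbeat approaches in the cluster ASCII file (interface names and addresses are illustrative): lan2 carries a second heartbeat subnet, while lan1 is an IP-less standby for lan0:

    NODE_NAME           node1
      NETWORK_INTERFACE lan0
        HEARTBEAT_IP    192.10.25.18    # first heartbeat subnet
      NETWORK_INTERFACE lan1            # standby card: no IP, takes over lan0 on failure
      NETWORK_INTERFACE lan2
        HEARTBEAT_IP    192.10.26.18    # second heartbeat subnet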


HA SuperDome Configurations: Redundant I/O Paths Example

• Redundant paths are required for shared storage devices in a cluster.
• MirrorDisk/UX or PV-Links can be configured to provide alternate paths to disk volumes and protect against I/O card failure (a Logical Volume Manager feature).
• At least two I/O card cages per partition are recommended to protect against I/O card cage failure.

[Figure: two partitions, each running one copy of HP-UX on two cells. Each partition has two 12-slot I/O card cages: card cage 1 holds a FW SCSI card and the core I/O card, card cage 2 holds a second FW SCSI card. The two cards give each partition a primary path and a mirror/alternate path to the shared disks.]
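A minimal PV-links sketch for the alternate path shown above (volume group and device files are illustrative; the two device files address the same disk through different interface cards):

    # Create the volume group through the primary path...
    pvcreate /dev/rdsk/c1t0d0
    vgcreate /dev/vgdata /dev/dsk/c1t0d0

    # ...then extend it with the alternate path to the *same* disk.
    # LVM records this as an alternate link (PV-link) and fails over
    # to it if the primary interface card or path fails.
    vgextend /dev/vgdata /dev/dsk/c5t0d0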


HA Single Cabinet Configuration “Cluster in a Box”

Notes:
• Considered a "Single System" HA solution.
• SPOFs in the cabinet can cause the entire cluster to fail (SPOFs: clock, backplane, power monitor).
• A four node (four partition) cluster is supported within a 16-Way system (*).
• Up to an eight node (eight partition) cluster is supported within a 32-Way system (*).
• Up to a sixteen node (sixteen partition) cluster is supported within a 64-Way system (*).
• A cluster lock is required for two partition configurations.
• The cluster lock must be powered independently of the cabinet.
• N+1 power supplies are required (included in the base price of SuperDome).
• Dual power connected to independent power circuits is required.
• Root volume mirrors must be on separate power circuits.

[Figure: one 16-Way, 32-Way or 64-Way system with four cells split between Partition 1 and Partition 2, forming a two-node ServiceGuard cluster with an independently powered cluster lock disk]


HA Multi Cabinet Configuration

Notes:
• No-SPOF configuration.
• A cluster lock is required if the cluster is wholly contained within two 16-Way or 32-Way systems (due to possible 50% cluster membership failure).
• ServiceGuard only supports a cluster lock up to four nodes, thus the two cabinet solution is limited to four nodes.
• Two cabinet configurations must evenly divide nodes between the cabinets (i.e., 3 and 1 is not a legal 4 node configuration).
• The cluster lock must be powered independently of either cabinet.
• N+1 power supplies are required.
• Dual power connected to independent power circuits is required.
• Root volume mirrors must be on separate power circuits.

[Figure: two independent 16-Way or 32-Way systems. One partition from each cabinet forms a two-node ServiceGuard cluster sharing a cluster lock disk; the remaining partitions are other, independent nodes.]
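A minimal sketch of generating the configuration for the two-cabinet, four-node case (partition hostnames are illustrative):

    # Each SuperDome partition is just an HP-UX node to ServiceGuard.
    # Generate a cluster configuration template spanning both cabinets,
    # two partitions from each, then verify and apply it.
    cmquerycl -v -C /etc/cmcluster/cluster.ascii \
        -n sd1-part1 -n sd1-part2 -n sd2-part1 -n sd2-part2
    cmcheckconf -C /etc/cmcluster/cluster.ascii
    cmapplyconf -C /etc/cmcluster/cluster.ascii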


HA Multi Cabinet Configuration (continued)

Notes: same as the previous multi cabinet configuration (no SPOF; cluster lock required and limited to four nodes; nodes evenly divided between cabinets; independent power circuits).

[Figure: two independent 32-Way systems hosting two four-node ServiceGuard clusters. Each cluster spans both cabinets, two partitions from each, and each cluster has its own independently powered cluster lock disk.]


HA Multi Cabinet Configuration: 64-Way Systems

[Figure: two 64-Way systems (each a dual cabinet), each divided into Partitions 1–3 built from cells of 4 CPUs and 8 GB RAM. One partition from each system forms a two-node ServiceGuard cluster with an independently powered cluster lock disk.]


HA Mixed Configurations

Notes:
• A cluster configuration can contain a mixture of SuperDome and non-SuperDome nodes.
• Care must be taken to maintain an even or greater number of nodes outside of the SuperDome cabinet.
• Using an even number of nodes within and outside of the SuperDome requires a cluster lock (maximum cluster size of four nodes).
• A cluster lock is not supported for clusters with greater than four nodes.
• ServiceGuard supports up to 16 nodes.
• A cluster size of greater than four nodes requires more nodes to be outside the SuperDome.
• Without a cluster lock, beware of configurations where the failure of a SuperDome cabinet will leave the remaining nodes with 50% or less quorum - the cluster will fail!

[Figure: a 16-Way, 32-Way or 64-Way system (Partitions 1 and 2) plus two N-Class servers in a four-node ServiceGuard cluster with a cluster lock disk]


HA Mixed Configurations

Notes: same as the previous mixed configuration.

[Figure: a 16-Way, 32-Way or 64-Way system (Partitions 1 and 2) plus three N-Class servers in a five-node ServiceGuard cluster with no cluster lock; a majority of the nodes (three of five) sits outside the SuperDome cabinet]


HA Mixed Configurations: Using a Low End System as an Arbitrator

Notes:
• A cluster size of greater than four nodes requires more nodes to be outside the SuperDome.
• One option is to configure a low end system to act only as an arbitrator (providing >50% quorum outside the SuperDome).
• Requires redundant heartbeat LANs.
• The arbitrator system must be on a separate power circuit.
• The SMS (Support Management Station) A-Class system could be used for this purpose.
• Suitable arbitrators: A180, A400, A500. External LAN connections only (the built-in 100/BT card is not supported with ServiceGuard).

[Figure: a 16-Way, 32-Way or 64-Way system (Partitions 1 and 2), two N-Class servers and an A-Class arbitrator in a five-node ServiceGuard cluster with no cluster lock]
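A sketch of the resulting five-node configuration; to ServiceGuard the arbitrator is an ordinary cluster member that simply runs no packages (hostnames are illustrative):

    # Two SuperDome partitions, two N-Class servers, one A-Class arbitrator.
    # The arbitrator contributes a quorum vote so that a SuperDome cabinet
    # failure leaves 3 of 5 nodes (>50%) and the cluster reforms.
    cmquerycl -v -C /etc/cmcluster/cluster.ascii \
        -n sd-part1 -n sd-part2 -n nclass1 -n nclass2 -n aclass-arb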


HA Mixed Configurations

Notes: same as the previous mixed configurations.

[Figure: the five-node mixed cluster with one N-Class down for maintenance; a SuperDome cabinet SPOF then leaves only 50% quorum and, with no cluster lock, the cluster fails!]


Frequently Asked Questions (FAQs)

Question: Can I configure a ServiceGuard cluster within a single SuperDome cabinet?
Answer: Yes, it is supported to configure a cluster within a single cabinet (16W, 32W or 64W). Recognize that this configuration contains SPOFs that can bring down the entire cluster.

Question: In a two cabinet configuration (using 16W, 32W or 64W systems), can I configure 1 node in one cabinet and 3 nodes in the other?
Answer: No, there are only two valid ways to create a cluster between two SuperDome systems: a 2 node cluster (1 node in each cabinet), or a 4 node cluster (2 nodes in each cabinet).

Question: Is a lock disk required for a 4 node (two cabinet) configuration?
Answer: Yes, since a single failure can take down exactly half of the cluster nodes.

Question: Are dual power cables recommended in each cabinet?
Answer: Yes, this optional feature should be ordered in HA configurations.

Question: Can a cluster be four 32W systems, each with one partition 8 cells wide?
Answer: Yes, single partition SuperDome systems (and non-SuperDome nodes) could be configured in up to a 16 node cluster.

Question: Are SuperDomes supported in Campus/Metro Cluster and ContinentalClusters configurations?
Answer: Yes, subject to the rules covered in this presentation.

Question: Is heartbeat handled any differently between partitions within SuperDome boxes?
Answer: Heartbeat is done over LAN connections between partitions. From the ServiceGuard perspective, each partition is just another HP-UX node.


References

• ACSL product support information (patches, PSPs, etc.): see http://haweb.cup.hp.com/Support, or Kmine
• MC/ServiceGuard User's Manual and Designing Disaster Tolerant HA Clusters User's Manual: see http://docs.hp.com/hpux/ha
• XP256 documentation: see http://docs.hp.com/hpux/systems/#massstorage
• HPWorld ‘99 tutorial “Disaster-Tolerant, Highly Available Cluster Architectures”: see http://docs.hp.com/hpux/ha or http://haweb.cup.hp.com/ATC/WP


Additional Refresher Slides


MC/ServiceGuard (refresher)

• MC/ServiceGuard features:
  – Multi OS
  – One-stop GUI
  – Rolling upgrade
  – Tape sharing
  – 16 nodes
  – No idle system
  – Online reconfiguration
  – Automatic Failback
  – Rotating standby
• Closely integrated with the OS, HP-UX

[Figure: clients connecting through an application tier to a database tier]


ServiceGuard OPS Edition

• ServiceGuard OPS Edition features:
  – Same protection functionality for applications as MC/SG
  – Additional protection for the Oracle database
  – Parallel database environment for increased availability and scalability

[Figure: end-user clients connecting to a parallel Oracle database cluster]


ServiceGuard Comparative Features

Cluster Topology:   Single cluster, up to 16 nodes (MC/ServiceGuard); up to 8 nodes (ServiceGuard OPS Edition)
Geography:          Data center
Network Subnets:    Single IP subnet
Network Types:      Dedicated Ethernet, FDDI or Token Ring
Cluster Lock Disk:  Required for 2 nodes, optional for 3-4 nodes, not used with larger clusters
Failover Type:      Automatic
Failover Direction: Omni-directional
Data Replication:   None


Campus Cluster Solution = MC/SG + Fibre Channel

[Figure: two sites up to ~10 km apart, connected by redundant Fibre Channel links through FC hubs, with the cluster heartbeat running between the sites]


Campus Cluster Comparative Features

Cluster Topology:   Single cluster, up to 4 nodes across 2 data centers, or up to 16 nodes across 3 data centers
Geography:          Campus, up to 10 km (Fibre Channel limitation)
Network Subnets:    Single IP subnet
Network Types:      Dedicated Ethernet, FDDI or Token Ring
Cluster Lock Disk:  Required for 2 nodes, optional for 3-4 nodes, not used with larger clusters
Failover Type:      Automatic
Failover Direction: Bi-directional
Data Replication:   MirrorDisk/UX


MetroCluster with Continuous Access XP

[Figure: HP 9000 systems in Manhattan and New Jersey, with HP SureStore E disk arrays replicating via HP Continuous Access XP]

• Delivers city-wide automated fail-over
• Protects against tornadoes, fires, floods
• Rapid, automatic site recovery without human intervention
• Effective between systems that are up to 43 km apart
• Provides very high cluster performance
• Backed by collaborative implementation, training and support services from HP
• Also available: MetroCluster with EMC SRDF, using EMC Symmetrix disk arrays


MetroCluster Comparative Features

Cluster Topology:   Single cluster, up to 16 nodes spread across 3 data centers
Geography:          Campus or metropolitan area
Network Subnets:    Single IP subnet
Network Types:      Dedicated Ethernet, or FDDI
Cluster Lock Disk:  Not used; 1-2 arbitrators in a third data center act as tie breaker
Failover Type:      Automatic
Failover Direction: Bi-directional
Data Replication:   Physical, in hardware (XP256 CA or EMC SRDF)


HP ContinentalClusters

• Highest levels of availability and disaster tolerance
• Reduces downtime from days to minutes
• Locate data centers at the economically and/or strategically best locations
• Transparent to applications and data
• Push button failover across 1000s of km
• Supports numerous wide area data replication tools for complete data protection
• Comprehensive Support and Consulting Services as well as Business Recovery Services for planning, design, support, and rehearsal
• Requires CSS support or greater

[Figure: data replication between two clusters, with cluster failure detection across the wide area network]


ContinentalClusters: Comparative Features

Cluster Topology:   Two clusters, each up to 16 nodes
Geography:          Continental or inter-continental
Network Subnets:    Dual IP subnets
Network Types:      Dedicated Ethernet or FDDI within each data center; Wide Area Network (WAN) between data centers
Cluster Lock Disk:  Required for 2 nodes, optional for 3-4 nodes, not used with larger clusters
Failover Type:      Semi-automatic
Failover Direction: Uni-directional
Data Replication:   Physical, in hardware (XP256 CA or EMC SRDF); logical, in software (Oracle Standby Database, etc.)


Two Data Center Campus Cluster Architecture (#1)

Example: a 4-node campus cluster using 16-Way, 32-Way or 64-Way systems and Fibre Channel for disk connectivity (500 meters point-to-point; 10 kilometers using long wave ports with FC-AL hubs).

• Multi cabinet SuperDome configurations are recommended at each data center for increased availability.
• Each data center must contain the same number of nodes (partitions).
• Use of MirrorDisk/UX is required to mirror data between the data centers.
• All systems are connected to both mirror copies of data for packages they can run.
• All systems must be connected to the redundant heartbeat network links.
• MUST have dual cluster lock disks, with all systems connected to both of them (see the sketch after this slide).
• MAXIMUM cluster size is currently 4 nodes when using cluster lock disks.

[Figure: Data Center A (a SuperDome with Partitions 1 and 2, disk sets A and B, cluster lock CL 1) and Data Center B (a SuperDome with Partitions 1 and 2, mirrored disk sets A' and B', cluster lock CL 2), joined by a highly available network, with physical data replication in both directions using MirrorDisk/UX]
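A minimal sketch of the dual cluster lock in the cluster ASCII file (volume group and device names are illustrative); each node lists its own paths to the lock disk in Data Center A and the lock disk in Data Center B:

    FIRST_CLUSTER_LOCK_VG   /dev/vglock1      # lock disk in Data Center A
    SECOND_CLUSTER_LOCK_VG  /dev/vglock2      # lock disk in Data Center B

    NODE_NAME               node1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.10.25.18
      FIRST_CLUSTER_LOCK_PV  /dev/dsk/c1t2d0
      SECOND_CLUSTER_LOCK_PV /dev/dsk/c2t2d0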


Three Data Center Campus Architecture (#2)

• Maximum cluster size: 16 nodes with HP-UX 11.0 and later 11.x versions.
• Multi cabinet SuperDome configurations are recommended at each data center for increased availability.
• The same number of nodes must be in each non-arbitrator data center, to maintain quorum in case an entire data center fails.
• Arbitrators need not be connected to the replicated data.
• No cluster lock disk(s).
• All non-arbitrator systems must be connected to both replica copies of the data.
• All systems must be connected to the redundant heartbeat network links.

[Figure: Data Center A and Data Center B, each with a SuperDome (Partitions 1 and 2) and disk sets A/B mirrored to A'/B' via MirrorDisk/UX; Data Center C holds 1 or 2 arbitrator system(s); all three sites are joined by a highly available network]


Three Data Center MetroCluster Architecture

• Maximum cluster size: 16 nodes with HP-UX 11.0 and later 11.x versions.
• Multi cabinet SuperDome configurations are recommended at each data center for increased availability.
• The same number of nodes must be in each non-arbitrator data center, to maintain quorum in case an entire data center fails.
• Arbitrators need not be connected to the replicated data.
• No cluster lock disk(s).
• Systems are not connected to both replica copies of the data (cannot have two distinct devices accessible with the same VGID).
• All systems must be connected to the redundant heartbeat network links.

[Figure: Data Center A and Data Center B, each with a SuperDome (Partitions 1 and 2), with physical data replication between them via EMC SRDF or XP256 CA; Data Center C holds 1 or 2 arbitrator system(s); all three sites are joined by a highly available network]


Two Data Center ContinentalClusters Architecture

• Systems are not connected to both replica copies of the data (hosts in each cluster are connected to only one copy of the data).
• Each cluster must separately conform to heartbeat network requirements.
• Each cluster must separately conform to quorum rules (cluster lock disks or arbitrators).
• Multi cabinet SuperDome configurations are recommended at each data center for increased availability.
• Use of cluster lock disks requires three power circuits in each cluster.
• The HA WAN is used for both data replication and inter-cluster monitoring.

[Figure: a primary cluster in Data Center A and a recovery cluster in Data Center B, each built from SuperDome partitions, joined by a highly available Wide Area Network (WAN) carrying physical or logical data replication in both directions]