
IBM PowerHA SystemMirror for Linux

Version 7.2.2

IBM


Note: Before using this information and the product it supports, read the information in “Notices.”

This edition applies to IBM PowerHA SystemMirror 7.2.2 for Linux and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright IBM Corporation 2017, 2018.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this document
    Highlighting
    Case-sensitivity in Linux
    ISO 9000

PowerHA SystemMirror for Linux concepts
    Compare PowerHA SystemMirror for AIX to PowerHA SystemMirror for Linux
    High availability clustering for Linux
        High availability and hardware availability
        Benefits of PowerHA SystemMirror
        High availability clusters
    Physical components of a PowerHA SystemMirror cluster
        PowerHA SystemMirror nodes
        Networks
        Clients
    Managing users and user groups
    PowerHA SystemMirror cluster nodes, networks, and heartbeating concepts
        Nodes
        Cluster networks
        Configuring the netmon.cf file
        IP address takeover
        IP address takeover by using IP aliases
        Heartbeating over TCP/IP and disk
        Split policy
        PowerHA SystemMirror resources and resource groups
    PowerHA SystemMirror cluster configurations
        Standby configurations
        Takeover configurations

Planning for PowerHA SystemMirror for Linux

Installing PowerHA SystemMirror for Linux
    Planning the installation of PowerHA SystemMirror for Linux
    Installing PowerHA SystemMirror for Linux
    Cluster snapshot for PowerHA SystemMirror for Linux

Configuring PowerHA SystemMirror for Linux
    Creating a cluster for PowerHA SystemMirror for Linux
    Adding a node to a cluster for PowerHA SystemMirror for Linux
    Configuring resources for PowerHA SystemMirror for Linux
    Configuring resource groups for PowerHA SystemMirror for Linux
    Configuring a Shared File System resource
    Verifying the standard configuration

Configuring dependencies between resource groups

Troubleshooting PowerHA SystemMirror for Linux
    Troubleshooting PowerHA SystemMirror clusters
    Using PowerHA SystemMirror cluster log files
    Using the Linux log collection utility
    Solving common problems
        PowerHA SystemMirror startup issues
        PowerHA SystemMirror disk issues
        PowerHA SystemMirror resource and resource group issues
        PowerHA SystemMirror Fallover issues
        PowerHA SystemMirror additional issues

PowerHA SystemMirror graphical user interface (GUI)
    Planning for PowerHA SystemMirror GUI
    Installing PowerHA SystemMirror GUI
    Logging in to the PowerHA SystemMirror GUI
    Navigating the PowerHA SystemMirror GUI
    Troubleshooting PowerHA SystemMirror GUI

Smart Assist for PowerHA SystemMirror
    PowerHA SystemMirror for SAP HANA
        Planning for SAP HANA
        Configuring the Smart Assist application for SAP HANA
    PowerHA SystemMirror for SAP NetWeaver
        Planning for SAP NetWeaver
        Configuring SAP NetWeaver
    Troubleshooting PowerHA SystemMirror Smart Assist issues
        PowerHA SystemMirror is not able to harvest some values during Wizard execution
        Replication mode that is configured in Smart Assist wizard is different from replication mode shown in SAP HANA setup
        Smart Assist policy fails to activate
        Smart Wizard does not detect or show one or more Ethernet interfaces in the list

Notices
    Privacy policy considerations
    Trademarks

Index

About this document

This document provides information about how you can configure, maintain, and monitor clusters with PowerHA® SystemMirror® for Linux.

Highlighting

The following highlighting conventions are used in this document:

Bold
    Identifies commands, subroutines, keywords, files, structures, directories, and other items whose names are predefined by the system. Bold highlighting also identifies graphical objects, such as buttons, labels, and icons that you select.

Italics
    Identifies parameters for actual names or values that you supply.

Monospace
    Identifies examples of specific data values, examples of text similar to what you might see displayed, examples of portions of program code similar to what you might write as a programmer, messages from the system, or text that you must type.

Case-sensitivity in Linux

Everything in the Linux operating system is case-sensitive, which means that it distinguishes between uppercase and lowercase letters. For example, you can use the ls command to list files. If you type LS, the system responds that the command is not found. Likewise, FILEA, FiLea, and filea are three distinct file names, even if they reside in the same directory. To avoid causing undesirable actions to be performed, always ensure that you use the correct case.
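For example, a shell session might look like the following (the file names and the exact shell message are illustrative):

$ ls
notes.txt  scripts
$ LS
bash: LS: command not found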

ISO 9000

ISO 9000 registered quality systems were used in the development and manufacturing of this product.


PowerHA SystemMirror for Linux concepts

The following information introduces important concepts you must understand before you can use PowerHA SystemMirror for Linux.

Compare PowerHA SystemMirror for AIX to PowerHA SystemMirror for Linux

The following table compares different functions that are shared between PowerHA SystemMirror for AIX® and PowerHA SystemMirror for Linux.

Table 1. Compare PowerHA SystemMirror for AIX to PowerHA SystemMirror for Linux

Configuration
    PowerHA SystemMirror for AIX: Any configuration change that is done by the user is saved only in the local Configuration Database (ODM) and is applied to the different cluster nodes after a synchronization procedure.
    PowerHA SystemMirror for Linux: If any configuration changes are done by the user, no separate synchronization procedure is needed, and the configuration change is applied immediately to all cluster nodes.

Split policy (manual)
    PowerHA SystemMirror for AIX: For the 2-node cluster, if one of the nodes goes down, the other node takes over the resources of the resource group even when the split policy is set to the Manual state.
    PowerHA SystemMirror for Linux: When the split policy is set to the Manual state for the 2-node cluster and one node goes down, manual intervention by using the runact command is needed so that the resources of the resource group are acquired by the other node.

Split policy (manual)
    PowerHA SystemMirror for AIX: When the split operation happens, the software broadcasts a message to all terminals that indicates the split operation and starts the relevant script to continue or to get rebooted as a recovery action.
    PowerHA SystemMirror for Linux: When the split operation happens, the user needs to verify the split operation by using the following command: lssrc -ls IBM.RecoveryRM | grep "Operational Quorum State". The value PENDING_QUORUM indicates a split operation, and the user needs to run the runact command with the relevant value to continue or to get rebooted as a recovery action.

Split handling (tiebreaker)
    PowerHA SystemMirror for AIX: The Tiebreaker policy can be used with any number of cluster nodes.
    PowerHA SystemMirror for Linux: A tiebreaker is used only when there is a tie and the total number of nodes that are configured in a cluster is even.

Split handling (tiebreaker)
    PowerHA SystemMirror for AIX: When a tiebreaker is configured and more than half of the nodes are in the shutdown state (for example, two nodes are down in a 3-node cluster), PowerHA SystemMirror for AIX can still start the resources on a node that is up and running.
    PowerHA SystemMirror for Linux: The Normal quorum type of Reliable Scalable Cluster Technology (RSCT) is used to create a peer domain cluster. This operation requires that at least half of the nodes must be up. So if two nodes are down in a 3-node cluster, then even if a tiebreaker is configured, the node that is up would not be able to acquire the resources.

Stop behavior of StartAfter dependency
    PowerHA SystemMirror for AIX: The StartAfter dependency does not impact the stop behavior of the resource groups.
    PowerHA SystemMirror for Linux: If two resource groups have a StartAfter dependency, the target resource group cannot be stopped while the source resource group is online.

Dependency (anti-collocated)
    PowerHA SystemMirror for AIX: The resource groups with anti-collocated dependency are never online on the same node.
    PowerHA SystemMirror for Linux: If the target resource group with anti-collocated dependency is brought online when the source resource group is already online on a node, and the intended node of the target resource group is not available, the target resource group becomes online on the same node where the source resource group is also active.

Dependency (collocated)
    PowerHA SystemMirror for AIX: For two resource groups with collocated dependency, failure of any one of the resource groups triggers movement of both the resource groups to another node.
    PowerHA SystemMirror for Linux: For two resource groups with collocated dependency, if any one of the resource groups fails, the PowerHA SystemMirror software does not move both the resource groups to another node.

unmanage parameter (cluster or node-level parameter)
    PowerHA SystemMirror for AIX: The unmanage parameter is available. This parameter allows the workload to run without being managed by PowerHA SystemMirror.
    PowerHA SystemMirror for Linux: This parameter is not available in PowerHA SystemMirror Version 7.2.2 for Linux.

START_ON_BOOT parameter (node-level parameter)
    PowerHA SystemMirror for AIX: If a node automatically joins the cluster after the reboot operation, the START_ON_BOOT parameter controls that node.
    PowerHA SystemMirror for Linux: The START_ON_BOOT parameter is not available in PowerHA SystemMirror Version 7.2.2 for Linux.

Application configuration
    PowerHA SystemMirror for AIX: The application controller and the application monitor are separately created by using the clmgr function.
    PowerHA SystemMirror for Linux: The application controller and the application monitor are separately created by using the clmgr function.

High availability clustering for Linux

The IBM® PowerHA SystemMirror software provides a low-cost commercial computing environment that ensures quick recovery of mission-critical applications from hardware and software failures.

PowerHA SystemMirror monitors the cluster resources for failures, and when a problem is detected, PowerHA SystemMirror moves the application (along with resources that ensure access to the application) to another node in the cluster.

High availability and hardware availability

High availability software is sometimes confused with simple hardware availability. Fault tolerant, redundant systems (such as RAID) and dynamic switching technologies (such as DLPAR) provide recovery of certain hardware failures, but do not provide the full scope of error detection and recovery required to keep a complex application highly available.

A modern, complex application requires access to all of these components:
- Nodes (CPU, memory)
- Network interfaces (including external devices in the network topology)
- Disk or storage devices
- Application software

Surveys of the causes of downtime show that actual hardware failures account for only a small percentage of unplanned outages. Other contributing factors include:
- Operator errors
- Environmental problems
- Application and operating system errors

Reliable and recoverable hardware simply cannot protect against failures of all these different aspects of the configuration. Keeping these varied elements, and therefore the application, highly available requires:
- Thorough and complete planning of the physical and logical procedures for access and operation of the resources on which the application depends. These procedures help to avoid failures in the first place.
- A monitoring and recovery package that automates the detection and recovery from errors.
- A well-controlled process for maintaining the hardware and software aspects of the cluster configuration while keeping the application available.


Benefits of PowerHA SystemMirror

PowerHA SystemMirror has many benefits.

PowerHA SystemMirror helps you with the following:
- The PowerHA SystemMirror planning process and documentation include tips and advice on the best practices for installing and maintaining a highly available PowerHA SystemMirror cluster.
- Once the cluster is operational, PowerHA SystemMirror provides the automated monitoring and recovery for all the resources on which the application depends.
- PowerHA SystemMirror provides a full set of tools for maintaining the cluster while keeping the application available to clients.

PowerHA SystemMirror allows you to:
- Quickly and easily set up a cluster by using the clmgr command-line interface or the application configuration assistants (Smart Assists).
- Ensure high availability of applications by eliminating single points of failure in a PowerHA SystemMirror environment.
- Manage how a cluster handles component failures.
- Secure cluster communications.
- Monitor PowerHA SystemMirror components and diagnose problems that might occur.

High availability clusters

An application cluster is a group of loosely coupled machines networked together, sharing disk, network, and other resources.

In a high availability cluster, multiple server machines cooperate to provide a set of services or resourcesto clients.

Physical components of a PowerHA SystemMirror cluster

PowerHA SystemMirror provides a highly available environment by identifying a set of resources essential to uninterrupted processing of an application. It also defines a protocol that nodes use to collaborate to ensure that these resources are available.

PowerHA SystemMirror extends the clustering model by defining relationships among cooperating processors where one processor provides the service offered by a peer should the peer be unable to do so. As shown in the following figure, a PowerHA SystemMirror cluster is made up of the following physical components:
- Nodes
- Networks
- Clients

The PowerHA SystemMirror software allows you to combine physical components into a wide range of cluster configurations, providing you with flexibility in building a cluster that meets your processing and availability requirements. This figure shows one example of a PowerHA SystemMirror cluster. Other PowerHA SystemMirror clusters could look very different, depending on the number of processors, the choice of networking and disk technologies, and so on.


PowerHA SystemMirror nodes

Nodes form the core of a PowerHA SystemMirror cluster. A node is a processor that runs Linux, the PowerHA SystemMirror software, and the application software.

In a PowerHA SystemMirror cluster, each node is identified by a unique name. A node has access to a set of resources: networks, network addresses, and applications. Typically, a node runs a server or a back end application that accesses data on the shared external disks.

Networks

As an independent, layered component of the Linux operating system, the PowerHA SystemMirror software is designed to work with any TCP/IP-based network.

Nodes in a cluster use the network to:
- Allow clients to access the cluster nodes
- Enable cluster nodes to exchange heartbeat messages

Types of networks

The PowerHA SystemMirror software defines two types of communication networks, characterized by whether these networks use communication interfaces based on the TCP/IP subsystem (TCP/IP-based) or disk-based networks.

[Figure 1. Example of a PowerHA SystemMirror cluster. The figure shows clients on two public LANs, cluster nodes connected by a private LAN, and a shared disk with mirrors attached over disk buses.]


TCP/IP-based network
    Connects two or more server nodes, and optionally allows client access to these cluster nodes, using the TCP/IP protocol. PowerHA SystemMirror uses only unicast communication for heartbeat.

Disk heartbeat
    Provides communication between PowerHA SystemMirror cluster nodes to monitor the health of the nodes, networks, and network interfaces, and to prevent cluster partitioning.

Clients

A client is a processor that can access the nodes in a cluster over a local area network.

Clients each run a "front end" or client application that queries the server application running on the cluster node. The PowerHA SystemMirror software provides a highly available environment for critical data and applications on the cluster nodes. The PowerHA SystemMirror software does not make the clients themselves highly available.

Managing users and user groups

The clmgr command manages user accounts and user groups on all nodes in a cluster by making configuration changes on any node in a cluster. The clmgr command also supports Lightweight Directory Access Protocol (LDAP) for users and user groups management.

PowerHA SystemMirror manages Linux and LDAP users and user groups across all PowerHA SystemMirror clusters. PowerHA SystemMirror user groups provide an additional level of security and enable the system administrators to change the configuration settings of a group of users as a single entity.

Managing Linux and LDAP user accounts in a PowerHA SystemMirror cluster

You can use the clmgr command to manage users and user groups from any node in a cluster. If you create a user that already exists on any node in the cluster, the operation might fail.

The following Linux files, which store user account information, must be consistent across all cluster nodes:
- The /etc/passwd system file
- Other system files in the /etc/security directory

Note: If a cluster node fails, users can log on to the operating nodes without experiencing any issues that are caused by mismatched user or group IDs.

As the system administrator of a PowerHA SystemMirror cluster, you can use the clmgr command to manage users and user groups from any node in a cluster. The clmgr command propagates new and updated information to all of the other nodes in the cluster.

To configure user accounts on an LDAP server, you must have an installation of the following software:
- openLDAP Server
- openLDAP Client

To add a user on the local system, enter the following command:

clmgr add user Local_User

where,


- The Local_User parameter is the user name of the user account that must be added on the local system and on the other nodes of the PowerHA SystemMirror cluster.

Note: If a node in a cluster is in an Offline state, you cannot create a user on that node.

To add an LDAP user on the local system, enter the following command:

clmgr add user LDAP_User registry=ldap DN=<distinguished_Name>

where,
- The DN parameter is the distinguished name, which is created when you set up the openLDAP server.

The clmgr command prompts for an LDAP password, which is the LDAP administrator password that was used when you set up the openLDAP server.
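For example, the following command adds a hypothetical LDAP user. The user name and DN are illustrative values that depend on how your openLDAP server was set up:

clmgr add user jsmith registry=ldap DN="cn=jsmith,ou=users,dc=example,dc=com"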

The clmgr add user command includes the following attributes for the Local user or an LDAP user:

Table 2. Local user and LDAP user attributes

user_name
    A login name that identifies this user account on the system.

ID
    Defines a unique decimal integer string to associate with this user account on the system.

ADMINSTRATIVE
    Select True if this user will be an administrative user. Otherwise, select False.

CHANGE_ON_NEXT_LOGIN
    Forces the user to change the password on the next login.

PRIMARY
    Specifies the name of the group to which the user belongs when the user logs in for the first time.

GROUPS
    Specifies the names of all user groups to which a user belongs. Group membership gives a user access authority to the protected resources.

ROLES
    Creates and manages role-based access control (RBAC) roles for users and user groups. You can use these roles to control which commands can be executed by different sets of users of PowerHA SystemMirror.

REGISTRY
    Indicates where the information about the new user must be stored. If the user is being defined locally and within the Linux environment, the value LOCAL must be specified. If the user is defined on a remote server, such as the LDAP server, the value LDAP must be used.

DN
    A DN is a sequence of relative distinguished names (RDN) separated by commas. The DN attribute is relevant when the registry attribute is set to LDAP.

HOME
    Specifies the home directory of a user.

SHELL
    Specifies the default shell for the user.

EXPIRATION
    Specifies the expiration date of the password for the user.

LOCKED
    Locks the password of the specified account.

DAYS_TO_WARN
    Defines the number of days before the system issues a warning that a password change is required.

LOCKOUT_DELAY
    Specifies the time period, in weeks, between password expiration and lockout.

MAX_PASSWORD_AGE
    Defines the maximum age (in weeks) for the user password.

MIN_PASSWORD_AGE
    Defines the minimum age (in weeks) for the user password.
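Several of these attributes can be combined in a single command. The following sketch is illustrative; the user name, ID, group name, and paths are hypothetical values:

clmgr add user dbadmin ID=2001 PRIMARY=dbgroup HOME=/home/dbadmin SHELL=/bin/bash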

Managing user group accounts in a PowerHA SystemMirror cluster

Similar to users, the user groups can be configured on a PowerHA SystemMirror Linux setup on the local system (files) and on an LDAP server.


To configure user accounts on an LDAP server, you must have an installation of the following software:
- openLDAP Server
- openLDAP Client

To add a group on the local system, enter the following command:

clmgr add group User_group

The clmgr command adds a group on the local system and on the other nodes of the cluster.

Note: The user groups are not created on the nodes that are in an Offline state in the cluster.

To add a group in LDAP, enter the following command:

clmgr add group User_group registry=ldap DN=<distinguished_name>

The following attributes can be used to add a group:

Table 3. Group attributes

ID
    Defines a unique decimal integer string to associate with this group on the system.

ADMINSTRATIVE
    Select True if this group will be an administrative group. Otherwise, select False.

USERS
    Specifies the names of the users that belong to the user group. The members of a group can access (that is, read, write, or run) a resource or file that is owned by another member of the group, as specified by the access control list of the resource.

REGISTRY
    Indicates where the information about the new group must be stored. If the group is being defined locally and within the Linux environment, the value LOCAL must be specified. If the group is defined on a remote server, such as the LDAP server, the value LDAP must be used.

DN
    A DN is a sequence of relative distinguished names (RDN) separated by commas. The DN attribute is relevant when the registry attribute is set to LDAP.
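For example, the following command creates a hypothetical local group with an explicit ID and an initial member list (all values are illustrative):

clmgr add group dbgroup ID=3001 USERS=dbadmin,jsmith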

PowerHA SystemMirror cluster nodes, networks, and heartbeating concepts

This section introduces major cluster topology-related concepts and definitions that are used throughout the documentation and in the PowerHA SystemMirror user interface.

Nodes

A node is a processor that runs both Linux and the PowerHA SystemMirror software.

Nodes might share a set of resources such as networks, network IP addresses, and applications. The PowerHA SystemMirror software supports up to 4 nodes in a cluster. In a PowerHA SystemMirror cluster, each node is identified by a unique name. In PowerHA SystemMirror, a node name and a hostname must be the same. Nodes serve as core physical components of a PowerHA SystemMirror cluster.

Cluster networks

Cluster nodes communicate with each other over communication networks.


If one of the physical network interface cards on a node on a network fails, PowerHA SystemMirror preserves the communication to the node by transferring the traffic to another physical network interface card on the same node. If a "connection" to the node fails, PowerHA SystemMirror transfers resources to another node to which it has access.

In addition, the clustering software sends heartbeats between the nodes over the cluster networks to periodically check on the health of the cluster nodes themselves. If the clustering software detects no heartbeats from a node, the node is considered failed and its resources are automatically transferred to another node.

It is highly recommended that you configure multiple communication paths between the nodes in the cluster. Having multiple communication networks prevents cluster partitioning, in which the nodes within each partition form their own entity. In a partitioned cluster, it is possible that nodes in each partition could allow simultaneous non-synchronized access to the same data. This can potentially lead to different views of data from different nodes.

Physical and logical networks

A physical network connects two or more physical network interfaces.

Note: If you are considering a cluster where the physical networks use external networking devices to route packets from one network to another, consider the following: When you configure a PowerHA SystemMirror cluster, PowerHA SystemMirror verifies the connectivity and access to all interfaces defined on a particular physical network. However, PowerHA SystemMirror cannot determine the presence of external network devices such as bridges and routers in the network path between cluster nodes. If the networks have external networking devices, ensure that you are using devices that are highly available and redundant so that they do not create a single point of failure in the PowerHA SystemMirror cluster.

A logical network is a portion of a physical network that connects two or more logical network interfaces or devices. A logical network interface or device is the software entity that is known by an operating system. There is a one-to-one mapping between a physical network interface/device and a logical network interface/device. Each logical network interface can exchange packets with each logical network interface on the same logical network.

If a subset of logical network interfaces on the logical network needs to communicate with each other (but with no one else) while sharing the same physical network, subnets are used. A subnet mask defines the part of the IP address that determines whether one logical network interface can send packets to another logical network interface on the same logical network.

Logical networks in PowerHA SystemMirror:

PowerHA SystemMirror has its own, similar concept of a logical network.

All logical network interfaces in a PowerHA SystemMirror network can communicate PowerHA SystemMirror packets with each other directly. Each logical network is identified by a unique name. A PowerHA SystemMirror logical network might contain one or more subnets.

PowerHA SystemMirror communication interfaces

A PowerHA SystemMirror communication interface is a grouping of a logical network interface, a service IP address, and a service IP label that you defined in PowerHA SystemMirror.

PowerHA SystemMirror communication interfaces combine to create IP-based networks. A PowerHA SystemMirror communication interface is a combination of:
- A logical network interface is the name to which Linux resolves a port (for example, eth0) of a physical network interface card.
- A service IP address is an IP address (for example, 129.9.201.1) over which services, such as an application, are provided, and over which client nodes communicate.
- A service IP label is a label (for example, a hostname in the /etc/hosts file, or a logical equivalent of a service IP address, such as node_A_en_service) that maps to the service IP address.
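For example, using the sample values above, a service IP label might simply be an /etc/hosts entry that resolves to the service IP address:

129.9.201.1    node_A_en_service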

Communication interfaces in PowerHA SystemMirror are used in the following ways:
- A communication interface refers to IP-based networks and network interface cards (NIC). The NICs that are connected to a common physical network are combined into logical networks that are used by PowerHA SystemMirror.
- Each NIC is capable of hosting several TCP/IP addresses. When configuring a cluster, you must define the IP addresses that PowerHA SystemMirror monitors (base or boot IP addresses), and the IP addresses that PowerHA SystemMirror keeps highly available (the service IP addresses).
- Heartbeating in PowerHA SystemMirror occurs over communication interfaces. PowerHA SystemMirror uses the heartbeating facility of RSCT. For more information, see the “Heartbeating over TCP/IP and disk” topic.

Configuring the netmon.cf file

The netmon.cf file is an optional configuration file that customers can use to augment the ping operation on the available target hosts on the network. The target hosts that are not defined to be part of the cluster are not available from the cluster nodes. The target hosts can be accessed from the IP addresses being monitored by topology services.

If you are running a single-node or two-node cluster, you must configure the netmon.cf file to detect network interface failures.

PowerHA SystemMirror periodically attempts to contact each network interface in the cluster. If the attempt to contact an interface fails on one node of a two-node cluster, the corresponding interface on the other node is also flagged as offline, because that node does not receive a response from the peer cluster node. To avoid such behavior, PowerHA SystemMirror must be configured to contact a network instance outside of the cluster. You can use the default gateway of the subnet from which the PowerHA SystemMirror GUI is used.

To configure the netmon.cf file, complete the following steps:
1. Configure the netmon.cf file to check the status of the network monitored by the virtual switch.
2. On each node, create the following file: /var/ct/cfg/netmon.cf

Note: Each line of the netmon.cf file contains the system name or the IP address of the external network instance. IP addresses can be specified in the dotted decimal notation.

Example of the netmon.cf file

The following example shows the configuration result for the netmon.cf file:

#This is default gateway for all interfaces in the subnet 192.168.1.0
192.168.1.1

If you are using the Virtual I/O Server (VIOS), the configuration test becomes unreliable because the netmon.cf file cannot determine whether the inbound traffic is received from the VIOS or a client. The LPAR cannot distinguish a virtual adapter and a real adapter. To address this problem, the netmon library supports up to 32 targets for each local network adapter. If the ping operation for any of these targets is successful, the local adapter is considered to be in an Online state. The targets can be specified in the netmon.cf file with the !REQD keyword. For example:

!REQD <owner> <target>
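For example, a netmon.cf file might tie a local adapter to one or more required external targets. The adapter name and target addresses in this sketch are illustrative:

!REQD eth0 192.168.1.1
!REQD eth0 192.168.1.2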

The targets can also be specified in the netmon.cf file by adding the following entry:

!IBQPORTONLY !ALL


Location

/var/ct/cfg/netmon.cf
    Location of the netmon.cf file in a PowerHA SystemMirror environment.

IP address takeover

IP address takeover is a mechanism for recovering a service IP label by moving it to another network interface card (NIC) on another node, when the initial NIC fails.

IPAT occurs if the physical network interface card on one node fails and if there are no other accessible physical network interface cards on the same network on the same node. Therefore, swapping IP labels of these NICs within the same node cannot be performed, and PowerHA SystemMirror will use IPAT to recover the service IP address by using a NIC on a backup node. IP address takeover keeps the IP address highly available by recovering the IP address after failures. PowerHA SystemMirror uses a method called IPAT via IP aliases.

IP address takeover by using IP aliases

Defining IP aliases to network interfaces allows creation of more than one IP label and address on the same network interface.

When a resource group containing the service IP label falls over from the primary node to the target node, the service IP labels are added (and removed) as alias addresses on top of the base IP addresses on an available NIC. This allows a single NIC to support more than one service IP label placed on it as an alias. Therefore, the same node can host more than one resource group at the same time.

When there are multiple interfaces on the same node connected to the same network, and those interfaces are not combined into an Ethernet aggregation, all boot addresses must be on different subnets. Also, any persistent addresses or service addresses must be on different subnets than the boot addresses.

Because IP aliasing allows coexistence of multiple service labels on the same network interface, you can use fewer physical network interface cards in your cluster. Upon fallover, PowerHA SystemMirror equally distributes aliases between available network interface cards.
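As an illustration of these subnet rules, a node with two NICs on the same network might use an address plan like the following (all addresses are hypothetical):

eth0 boot address:   192.168.10.1/24
eth1 boot address:   192.168.11.1/24
service IP address:  192.168.20.10/24 (placed as an alias on whichever NIC is available)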

Heartbeating over TCP/IP and disk

A heartbeat is a type of a communication packet that is sent between nodes. Heartbeats are used to monitor the health of the nodes, networks, and network interfaces, and to prevent cluster partitioning.

In order for a PowerHA SystemMirror cluster to recognize and respond to failures, it must continually check the health of the cluster. Some of these checks are provided by the heartbeat function.

Each cluster node sends heartbeat messages at specific intervals to other cluster nodes, and expects to receive heartbeat messages from the nodes at specific intervals. If messages stop being received, PowerHA SystemMirror recognizes that a failure has occurred. Heartbeats can be sent over:
- TCP/IP networks
- A physical volume (disk) that is accessible from all cluster nodes

The heartbeat function is configured to use specific paths between nodes. This allows heartbeats to monitor the health of all PowerHA SystemMirror networks and network interfaces, as well as the cluster nodes themselves.

The heartbeat paths are set up automatically by RSCT; you have the option to configure disk paths as part of the PowerHA SystemMirror configuration.


Heartbeating over Internet Protocol networks

PowerHA SystemMirror relies on RSCT to provide heartbeating between cluster nodes over Internet Protocol networks.

Heartbeating between cluster nodes can be configured by using the following command:

clmgr add network netw_0 TYPE=ether pvid=qoPDC7-G169-UJv3-zHI5-KePI-Whjy-K73nrb nodes=nodeA,nodeB

Heartbeating over disk

You can optionally configure a disk heartbeat. After configuration, the Reliable Scalable Cluster Technology (RSCT) automatically exchanges heartbeats over the shared disk. These connections can provide an alternative heartbeat path for a cluster that uses a single TCP/IP-based network.

You can configure a disk heartbeat by using the following command:

clmgr add network netw_0 TYPE=disk pvid=qoPDC7-G169-UJv3-zHI5-KePI-Whjy-K73nrb nodes=nodeA,nodeB

Once the disk heartbeat configuration is complete, you can verify it by using the following command:

clmgr query network netw_0

To check whether the heartbeat interface is properly functioning, run the following command:

lssrc -ls cthats

The following output is generated:

Subsystem       Group           PID      Status
cthats          cthats          372762   active

Network Name   Indx Defd Mbrs St  Adapter ID     Group ID
dhbTEST_CG     [ 1]    2    2  S  255.255.10.1   255.255.10.1
dhbTEST_CG     [ 1]  dhb-1-2      0x01c8f4fe     0x01c8f511

In the specified sample command output, find the disk heartbeat stanza by locating the name of the disk heartbeat network, which is obtained by using the clmgr query network command output. The name of the disk heartbeat interface resource is also displayed.

Positive result: Disk heartbeat is occurring if the Defd and Mbrs parameters are the same number:

Defd Mbrs
   2    2

Negative or error result: Disk heartbeat is not occurring if the Defd and Mbrs parameters are different:

Defd Mbrs
   2    1

If you get a negative or error result for the disk heartbeat process, recheck the following options:
- The port VLAN ID (PVID) on which the disk heartbeat network is configured is available on both the cluster nodes.
- Storage connectivity with both the cluster nodes is working.
- The physical volume on which the disk heartbeat is configured is not being used by any other process, which might block the I/O on the device.

You can remove the existing heartbeat network by using the following command:

clmgr delete network netw_0


Split policy

A cluster split event can occur when a group of nodes cannot communicate with the remaining nodes in a cluster. A cluster split event splits the cluster into two or more partitions.

After a cluster split event, the subdomain that has the most nodes is retained and the other subdomains are deleted. If exactly half of the nodes of a domain are online and the remaining nodes are inaccessible, PowerHA SystemMirror must determine which subdomain has operational quorum by using Reliable Scalable Cluster Technology (RSCT). The subdomain with an operational quorum is retained and the other subdomains are deleted.

You can use PowerHA SystemMirror to configure a split policy that specifies the response to a cluster split event.

To configure a split policy, the following options are available:

None

The None split policy allows each partition that is created by the cluster split event to become an independent cluster, and each partition is started independently of the other partition. A user can check the quorum status by using the following command:

lssrc -ls IBM.RecoveryRM

Before a split event, the operational quorum state is HAS_QUORUM and the configuration quorum is TRUE. A user can make both operational changes, such as moving the cluster online or offline, and configuration changes, such as adding or deleting a resource group in the cluster.

However, after the split event, for each partition, the operational quorum state will be HAS_QUORUM but the configuration quorum state will be FALSE. Hence, PowerHA SystemMirror and the user can perform operational changes but not configuration changes.

For example, in a two-node cluster, if the cluster split event occurs, all the resources are online on both the nodes. As both the nodes have quorum, if the merge event occurs, the lower priority node is rebooted if a critical resource is running on it, and only the RSCT subsystems are restarted if a non-critical resource or no resource is running on it. Use the None split policy if you prefer multiple instances of an application that run simultaneously after a split event.

To set the cluster policy as None, enter the following command:

clmgr modify cluster SPLIT_POLICY=none

Manual

The Manual split policy allows each partition that is created by the cluster split event to become an independent cluster. However, each partition cannot start a workload until the user allots the ownership to the subcluster. Until then, the operational quorum state is PENDING_QUORUM for each subcluster, and PowerHA SystemMirror does not perform any action. The user can provide ownership by using the following command:

runact -c IBM.PeerDomain ResolveOpQuorumTie Ownership=0/1

where

1 Changes the quorum state from PENDING_QUORUM to HAS_QUORUM

0 Changes the quorum state from PENDING_QUORUM to NO_QUORUM

The subcluster with the NO_QUORUM state has no authority to make any operational change on the nodes. If a critical resource is running on a NO_QUORUM node, the node is rebooted to avoid corruption of the critical resource. The subcluster with the HAS_QUORUM state has authority to make any operational change on the subcluster.


Before a split event, the operational quorum state is HAS_QUORUM and the configuration quorum is TRUE. PowerHA SystemMirror allows both operational changes and configuration changes in the cluster.

But after the split event, for each partition, the operational quorum state will be PENDING_QUORUM and the configuration quorum will be FALSE. PowerHA SystemMirror does not allow the user to make any operational changes, not even the configuration changes. If a cluster splits, all the resources are in the same state as they were before the split event. The user can give permission as 1 to one subcluster and 0 to the other subcluster, so that the resources run only on one subcluster. After giving permission, PENDING_QUORUM changes to either HAS_QUORUM or NO_QUORUM.
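For example, after a split event an administrator might run the following sequence on the partition that should keep the workload (the commands are the ones described in this section; the output line is abbreviated):

lssrc -ls IBM.RecoveryRM | grep "Operational Quorum State"
Operational Quorum State : PENDING_QUORUM

runact -c IBM.PeerDomain ResolveOpQuorumTie Ownership=1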

After the split event, a node that has a critical resource running on it is rebooted if permission 0 is given. If a non-critical resource or no resource is running on that subcluster, it remains the same.

The Manual policy can be used if the user wants to give permission after a split event, before another instance of the application is created.

If the merge event occurs, the subcluster with the HAS_QUORUM state automatically allows a user to perform any operational changes. The lower priority node is rebooted if a critical resource is running on it with an operational quorum state of HAS_QUORUM before the merge, and the subsystems are restarted if a non-critical resource or no resource is running on it.

To set the cluster policy as Manual, enter the following command:

clmgr modify cluster SPLIT_POLICY=manual

Tiebreaker

You can use the tiebreaker option to specify a SCSI disk or a Network File System (NFS) file that is used by the split and merge policies.

A tiebreaker disk or an NFS file is used when the sites in the cluster can no longer communicate with each other. This communication failure results in the cluster splitting the sites into two independent partitions. If failure occurs because the cluster communication links are not responding, both partitions attempt to lock the tiebreaker disk or the NFS file. The partition that acquires the tiebreaker disk continues to function, while the other partition reboots, or has cluster services restarted, depending on whether any critical resources are configured.

The disk or NFS-mounted file that is identified as the tiebreaker must be accessible to all nodes in the cluster.

The Tiebreaker split policy allows one of the partitions that is created by the cluster split event to become an independent cluster. After winning the tiebreaker, the subcluster can start the workload or can allow the user to perform operational changes. Whenever a split event occurs, the operational quorum state remains PENDING_QUORUM until one of the subclusters wins the tiebreaker. The subcluster that wins the tiebreaker starts the workload or allows the user to make operational changes.

The following are types of tiebreaker in PowerHA SystemMirror for Linux:

NFS tiebreaker

When the Network File System (NFS) type of tiebreaker is configured, one directory of the NFS server is mounted on all nodes of the cluster. When a split event occurs due to a network failure on one node, that node loses the mount of the NFS server directory and becomes the losing node. The other node is the winning node.

Before you configure the NFS tiebreaker, the user must have a proper understanding of the permissions and accessibility of the NFS server machine, and must check the directory on the NFS server that is going to be mounted on the local directory of the nodes. To configure the NFS tiebreaker in PowerHA SystemMirror for Linux, enter the following command:


clmgr modify cluster clMain SPLIT_POLICY=tiebreaker TIEBREAKER=nfs NFS_SERVER=192.2.2.5 NFS_SERVER_MOUNT_POINT=/test_nfs_tie NFS_LOCAL_MOUNT_POINT=/test_nfs_local NFS_FILE_NAME=nfsTestConfig

where,
- SPLIT_POLICY=tiebreaker is the type of split policy.
- TIEBREAKER=nfs is the type of tiebreaker.
- NFS_SERVER=192.2.2.5 is the IP address of the NFS server machine.
- NFS_SERVER_MOUNT_POINT=/test_nfs_tie is the directory on the NFS server machine.
- NFS_LOCAL_MOUNT_POINT=/test_nfs_local is the directory on the nodes.
- NFS_FILE_NAME=nfsTestConfig is any file name given by the user.

Before the split event, the quorum state can be seen by using the following command:

lssrc -ls IBM.RecoveryRM
Operational Quorum State : HAS_QUORUM
In Config Quorum : TRUE

After the split event occurs, the quorum status will be:

Operational Quorum State : PENDING_QUORUM
In Config Quorum : FALSE

The quorum shows the following state after winning the tiebreaker on the winning node:

Operational Quorum State : HAS_QUORUM
In Config Quorum : FALSE

The quorum shows the following state after losing the tiebreaker on the losing node:

Operational Quorum State : NO_QUORUM
In Config Quorum : FALSE

The losing node is rebooted if a critical resource is running on it. Otherwise, it remains as it is.

The winning node starts the workload if it was not already running; otherwise, the workload continues to run as it is, and the user can perform operational configuration changes. The losing node merges automatically into the cluster after the reboot if a critical resource is running on it. If a non-critical resource is running on the node, the RSCT subsystem is restarted on that node after the merge event.

The NFS tiebreaker must be used if a single-node interface fails. If two nodes are connected to two different switches and one switch goes down, the node that is connected to that switch cannot communicate. However, the second node can still communicate because the other switch is working. The second node wins because it can reach the NFS server.

Disk tiebreaker

The disk tiebreaker uses a disk to resolve the tie. When the cluster split event occurs, each node tries to lock the disk. The node that locks the disk first becomes the winning node. The other node is the losing node.

Before you configure the disk tiebreaker, the user must find out the shared disk among the nodes by using the following command:

lsrsrc IBM.Disk

To configure the disk tiebreaker in PowerHA SystemMirror for Linux, enter the following command:

clmgr modify cluster SPLIT_POLICY=tiebreaker TIEBREAKER=disk DISK_WWID=36017595807eed37r0000000000000045

where,
- SPLIT_POLICY=tiebreaker is the type of split policy.
- TIEBREAKER=disk is the type of tiebreaker.
- DISK_WWID=36017595807eed37r0000000000000045 is the shared disk that is used for the tiebreaker.

The disk tiebreaker must be used if an interface failure occurs on both the nodes. If both the nodes are connected to the same switch and the switch goes down, the two nodes cannot reach the NFS server. The winning node is decided by the locking system on the disk. The node that locks the disk first is the winning node.

PowerHA SystemMirror resources and resource groups

This topic includes resource-related concepts and definitions that are used throughout the documentation and also in the PowerHA SystemMirror user interface.

Identifying and keeping your cluster resources

The PowerHA SystemMirror software provides a highly available environment.

The PowerHA SystemMirror software does this by:
- Identifying the set of cluster resources that are essential to the operation of an application, and combining those resources into a resource group.
- Defining the resource group policies and attributes that dictate how PowerHA SystemMirror manages resources to keep them highly available during different cluster events like startup, fallover, and fallback.

By identifying resources and defining resource group policies, the PowerHA SystemMirror software makes numerous cluster configurations possible, providing tremendous flexibility in defining a cluster environment tailored to individual requirements.

Identifying cluster resources

Cluster resources can include both hardware resources and software resources:
- Service IP labels or addresses
- Applications

The PowerHA SystemMirror software handles the resource group as a unit, thus keeping the interdependent resources together on one node and keeping them highly available.

Types of cluster resources

This section provides a brief overview of the resources that you can configure in PowerHA SystemMirror and include into resource groups to let PowerHA SystemMirror keep them highly available.

Applications:

The purpose of a highly available system is to ensure that critical services are accessible to users. Applications usually need no modification to run in the PowerHA SystemMirror environment. Any application that can be successfully restarted after an unexpected shutdown is a candidate for PowerHA SystemMirror.

For example, all commercial DBMS products provide a checkpoint on the state of the disk in some sort of transaction journal. In the event of a server failure, the fallover server restarts the DBMS, which reestablishes database consistency and then resumes processing.

Note: The start and stop scripts are the main points of control for PowerHA SystemMirror over an application. It is very important that the scripts you specify operate correctly to start and stop all aspects of the application. If the scripts fail to properly control the application, other parts of the application recovery might be affected. For example, if the stop script you use fails to completely stop the application and a process continues to access a disk, PowerHA SystemMirror will not be able to recover it on the backup node.

Add your application to a PowerHA SystemMirror resource group only after you have thoroughly tested your application start and stop scripts.
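The scripts themselves are ordinary executables that you supply. The following minimal sketch assumes a hypothetical application daemon appd installed under /opt/app; a production stop script must also terminate every process that accesses the shared disk:

#!/bin/sh
# start_app.sh (hypothetical): start the application; return 0 only on success
/opt/app/bin/appd --config /opt/app/etc/appd.conf

#!/bin/sh
# stop_app.sh (hypothetical): stop the application completely, including child processes
/opt/app/bin/appd --shutdown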

The resource group that contains the application should also contain all the resources that the application depends on, including service IP addresses. Once such a resource group is created, PowerHA SystemMirror manages the entire resource group and, therefore, all the interdependent resources in it as a single entity. PowerHA SystemMirror coordinates the application recovery and manages the resources in the order that ensures activating all interdependent resources before other resources.

PowerHA SystemMirror includes application monitoring capability, whereby you can define a monitor to detect the unexpected termination of a process or to periodically poll the termination of an application and take automatic action upon detection of a problem.
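In clmgr terms, the application controller (the start and stop scripts) and the application monitor are created as separate objects, as noted in Table 1. The following sketch is an assumption modeled on the clmgr interface; the object names, script paths, and attribute spellings are illustrative and should be verified against your release:

clmgr add application_controller app1 STARTSCRIPT=/opt/app/start_app.sh STOPSCRIPT=/opt/app/stop_app.sh
clmgr add application_monitor app1_mon APPLICATIONS=app1 PROCESSES=appd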

Service IP labels and IP addresses:

A service IP label is used to establish communication between client nodes and the server node. Services, such as a database application, are provided using the connection made over the service IP label.

A service IP label can be placed in a resource group as a resource that allows PowerHA SystemMirror to monitor its health and keep it highly available, either within a node or between the cluster nodes by transferring it to another node in the event of a failure.

Note: Certain subnet requirements apply for configuring service IP labels.

Persistent node IP labels:

A persistent node IP label is a useful administrative tool that lets you contact a node even if the PowerHA SystemMirror cluster services are down on that node.

When you define persistent node IP labels, PowerHA SystemMirror attempts to put an IP address on the node. Assigning a persistent node IP label to a network on a node allows you to have a node-bound IP address on a cluster network that you can use for administrative purposes to access a specific node in the cluster. A persistent node IP label is an IP alias that can be assigned to a specific node on a cluster network and that:
- Always stays on the same node (is node-bound)
- Co-exists on a network interface card that already has a service IP label defined
- Does not require installing an additional physical network interface card on that node

There can be one persistent node IP label per node.

File system:

A file system is a simple database for storing files and directories. A file system in Linux is stored on a single logical volume.

The main components of the file system are the logical volume that holds the data and the file system log. PowerHA SystemMirror supports both ext3 and xfs as shared file systems.

Cluster resource groups

To be made highly available by the PowerHA SystemMirror software, each resource must be included in a resource group. Resource groups allow you to combine related resources into a single logical entity for easier management.


You can configure the cluster so that certain applications stay on the same node, or on different nodes, not only at startup, but during fallover and fallback events. To do this, you configure the selected resource groups as part of a location dependency set.

Participating node list:

A participating node list defines a list of nodes that can host a particular resource group.

You define a node list when you configure a resource group. The participating node list can contain some or all nodes in the cluster.

Typically, this list contains all nodes sharing the same data and disks.

Default node priority:

Default node priority is identified by the position of a node in the node list for a particular resource group.

The first node in the node list has the highest node priority. This node is also called the home node for a resource group. The node that is listed before another node has a higher node priority than the current node.

Home node:

The home node (the highest priority node for this resource group) is the first node that is listed in the participating node list for a nonconcurrent resource group.

The home node is a node that normally owns the resource group.

The term home node is not used for concurrent resource groups because they are owned by multiple nodes.

Startup, fallover, and fallback:

PowerHA SystemMirror ensures the availability of cluster resources by moving resource groups from one node to another when the conditions in the cluster change.

Cluster startup
    When cluster services are started, resource groups are activated on different cluster nodes according to the resource group startup policy that you selected.

Node failure
    Resource groups that are active on this node fall over to another node.

Node recovery
    When cluster services are started on the home node after a failure, the node reintegrates into the cluster, and may acquire resource groups from other nodes depending on the fallback policy for the group.

Resource failure and recovery
    A resource group might fall over to another node, and be reacquired, when the resource becomes available.

Cluster shutdown
    When you stop cluster services, you can choose to have the resource groups move to a backup node or be taken offline on the current node.


Resource group policies

The following list describes the specific options for each policy:

Startup
    Startup is the activation of a resource group on a node (or multiple nodes). Resource group startup occurs when cluster services are started.

Fallover
    Fallover is the movement of a resource group from the node that currently owns the resource group to another active node after the current node experiences a failure.

    Fallover only occurs with nonconcurrent resource groups. Concurrent resource groups are active on all nodes concurrently, so the failure of a single node means that only the instance of the resource group on that node is affected.

Fallback
    Fallback is the movement of resources to the home node when it is reintegrated in the cluster after a failure. No fallback means that resources continue to run on the same node, even after the reintegration of the home node post failure.

Each combination of these policies allows you to specify varying degrees of control over which node, or nodes, control a resource group.

Startup, fallover, and fallback are specific behaviors that describe how resource groups behave at different cluster events. It is important to keep in mind the difference between fallover and fallback. These terms appear frequently in discussions of the various resource group policies.

Networks and resource groups
A service IP label can be included in any nonconcurrent resource group - that resource group could have any of the allowed startup policies except Online on All Available Nodes.

PowerHA SystemMirror cluster configurations
This chapter provides examples of the types of cluster configurations supported by the PowerHA SystemMirror software.

This list is by no means an exhaustive catalog of the possible configurations you can define using the PowerHA SystemMirror software. Rather, use these examples as a starting point for thinking about the cluster configuration best suited to your environment.

Standby configurations
Standby configurations are the traditional redundant hardware configurations, where one or more standby nodes stand idle, or run a less critical application, waiting to take over the critical application if a server (primary) node fails or leaves the cluster.

Concurrent resource groups are activated on all nodes concurrently and therefore cannot be used in a standby configuration.

Example: Standby configurations with online on first available node startup policy


In this setup, the cluster resources are defined as part of a single resource group. A node list is then defined as consisting of two nodes. The first node, Node A, is assigned a takeover (ownership) priority of 1. The second node, Node B, is assigned a takeover priority of 2.

At cluster startup, Node A (which has a priority of 1) assumes ownership of the resource group. Node A is the "server" node. Node B (which has a priority of 2) stands idle, ready to take over should Node A fail or leave the cluster. Node B is, in effect, the "standby".

If the server node leaves the cluster, the standby node assumes control of the resource groups owned by the server, starts the highly available applications, and services clients. The standby node remains active until the home node rejoins the cluster (based on the fallback policy configured). At that point, the standby node releases the resource groups it has taken over, and the server node reclaims them. The standby node then returns to an idle state.
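As a hedged sketch of such a configuration with the clmgr command (all names are placeholders; the NODES, SERVICE_IP, and APPLICATIONS attributes are described later in this document, and the order of the NODES list sets the takeover priority):
clmgr add resource_group rg_critical NODES=nodeA,nodeB SERVICE_IP=sip_1 APPLICATIONS=app_1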

Extending standby configurations

The standby configuration from the previously described example can be easily extended to larger clusters. The advantage of this configuration is that it makes better use of the hardware. The disadvantage is that the cluster can suffer severe performance degradation if more than one server node leaves the cluster.

The following figure illustrates a three-node standby configuration using the resource groups with these policies:
v Startup policy: Online on First Available Node
v Fallover policy: Fallover to Next Priority Node in the List
v Fallback policy: Fallback to Home Node

Figure 2. One-for-one standby configuration where IP label returns to the home node


In this configuration, two separate resource groups (A and B) and a separate node list for each resource group exist. The node list for Resource Group A consists of Node A and Node C. Node A has a takeover priority of 1, while Node C has a takeover priority of 2. The node list for Resource Group B consists of Node B and Node C. Node B has a takeover priority of 1; Node C again has a takeover priority of 2. (A resource group can be owned by only a single node in a nonconcurrent configuration.)

Since each resource group has a different node at the head of its node list, the cluster's workload is divided, or partitioned, between these two resource groups. Both resource groups, however, have the same node as the standby in their node lists. If either server node leaves the cluster, the standby node assumes control of that server node's resource group and functions as the departed node.

In this example, the standby node has three network interfaces (not shown) and separate physical connections to each server node's external disk. Therefore, the standby node can, if necessary, take over for both server nodes concurrently. The cluster's performance, however, would most likely degrade while the standby node was functioning as both server nodes.

Takeover configurations
In the takeover configurations, all cluster nodes do useful work, processing part of the cluster's workload. There are no standby nodes. Takeover configurations use hardware resources more efficiently than standby configurations since there is no idle processor. Performance can degrade after node failure, however, since the load on remaining nodes increases.

One-sided takeover
This configuration has two nodes actively processing work, but only one node providing highly available services to cluster clients. That is, although there are two sets of resources within the cluster (for example, two server applications that handle client requests), only one set of resources needs to be highly available.

The following figure illustrates a two-node, one-sided takeover configuration. In the figure, a lower number indicates a higher priority.

Figure 3. One-for-two standby configuration with three resource groups


This set of resources is defined as a PowerHA SystemMirror resource group and has a node list that includes both nodes. The second set of resources is not defined as a resource group and, therefore, is not highly available.

At cluster startup, Node A (which has a priority of 1) assumes ownership of Resource Group A. Node A, in effect, "owns" Resource Group A. Node B (which has a priority of 2 for Resource Group A) processes its own workload independently of this resource group.

If Node A leaves the cluster, Node B takes control of the shared resources. When Node A rejoins the cluster, Node B releases the shared resources.

If Node B leaves the cluster, however, Node A does not take over any of its resources, since Node B's resources are not defined as part of a highly available resource group in whose chain this node participates.

This configuration is appropriate when a single node is able to run all the critical applications that need to be highly available to cluster clients.

Mutual takeover
The mutual takeover for nonconcurrent access configuration has multiple nodes, each of which provides distinct highly available services to cluster clients. For example, each node might run its own instance of a database and access its own disk.

Furthermore, each node has takeover capacity. If a node leaves the cluster, a surviving node takes over the resource groups owned by the departed node.

The mutual takeover for nonconcurrent access configuration is appropriate when each node in the cluster is running critical applications that need to be highly available and when each processor is able to handle the load of more than one node.

The following figure illustrates a two-node mutual takeover configuration for nonconcurrent access. In the figure, a lower number indicates a higher priority.

Figure 4. One-sided takeover configuration with resource groups in which IP label returns to the home node


The key feature of this configuration is that the cluster's workload is divided, or partitioned, between the nodes. Two resource groups exist, in addition to a separate resource chain for each resource group. The nodes that participate in the resource chains are the same. It is the differing priorities within the chains that designate this configuration as mutual takeover.

The chains for both resource groups consist of Node A and Node B. For Resource Group A, Node A has a takeover priority of 1 and Node B has a takeover priority of 2. For Resource Group B, the takeover priorities are reversed. Here, Node B has a takeover priority of 1 and Node A has a takeover priority of 2.

At cluster startup, Node A assumes ownership of Resource Group A, while Node B assumes ownership of Resource Group B.

If either node leaves the cluster, its peer node takes control of the departed node's resource group. When the "owner" node for that resource group rejoins the cluster, the takeover node relinquishes the associated resources; they are reacquired by the integrating home node.

Two-node mutual takeover configuration
In this configuration, both nodes have simultaneous access to the shared disks and own the same disk resources.

The following figure illustrates a two-node mutual takeover configuration for concurrent access:

Figure 5. Mutual takeover configuration for nonconcurrent access


In this example, both nodes are running an instance of a server application that accesses the database on the shared disk. The application's proprietary locking model is used to arbitrate application requests for disk resources.

Running multiple instances of the same server application allows the cluster to distribute the processing load. As the load increases, additional nodes can be added to further distribute the load.

Figure 6. Two-node mutual takeover configuration for concurrent access


Planning for PowerHA SystemMirror for Linux

All the relevant Red Hat Package Manager (RPM) packages are installed automatically when you start the PowerHA SystemMirror installation script.

To install PowerHA SystemMirror for Linux, the following RPMs are used internally:
v powerhasystemmirror

v powerhasystemmirror.adapter

v powerhasystemmirror.policies

v powerhasystemmirror.policies.one

v powerhasystemmirror.policies.two

v powerhasystemmirror.sappolicy

PowerHA SystemMirror internally uses Reliable Scalable Cluster Technology (RSCT) for the clustering technology. RSCT Version 3.2.2.4 is included in the PowerHA SystemMirror package. The required versions of the RSCT RPM are installed automatically by default when you install PowerHA SystemMirror. During installation of PowerHA SystemMirror, if RSCT is detected, and the level of RSCT is lower than the required RSCT package, then the currently installed RSCT package is upgraded. PowerHA SystemMirror for Linux installs the following RSCT RPMs:
v rsct.basic

v rsct.core

v rsct.core.utils

v rsct.opt.storagerm

v src

v If you define the disk tiebreaker resources, the disk on which IBM.TieBreaker resources are stored must not be used to store file systems.

v Internet Protocol version 6 (IPv6) configuration is not supported in PowerHA SystemMirror for Linux.

v You can check the firewall status by running the systemctl status firewalld.service command. The firewall must be disabled or you must open the following ports:
657/tcp
16191/tcp
657/udp
12143/udp
12347/udp
12348/udp
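For example, one of these ports can be opened with firewalld as follows (repeat for each port and protocol; the syntax matches the firewall-cmd example in the troubleshooting section of this document, and --reload applies the permanent rule):
firewall-cmd --permanent --zone=public --add-port=657/tcp
firewall-cmd --reload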

When a node is configured with multiple connections to a single network, the network interfaces serve different functions in the PowerHA SystemMirror.

A service interface is a network interface that is configured with the PowerHA SystemMirror service IP label. The service IP label is used by clients to access application programs. The service IP is only available when the corresponding resource group is online.

A persistent node IP label is an IP alias that can be assigned to a specific node on a cluster network. A persistent node IP label always stays on the same node (node-bound), and coexists on a NIC that already has a service or boot IP label defined. A persistent node IP label does not require installing an extra physical NIC on that node.


If you assign a persistent node IP label, it provides a node-bound address that you can use for administrative purposes because a connection to a persistent node IP label always goes to a specific node in the cluster. You can have one persistent node IP label per node.

For PowerHA SystemMirror, you must configure a persistent IP label for each cluster node. This is useful to access a particular node in a PowerHA SystemMirror cluster for running reports or for diagnostics. This provides the advantage that PowerHA SystemMirror can access the persistent IP label on the node despite individual NIC failures, provided spare NICs are present on the network.

If you assign IP aliases to NICs, it allows you to create more than one IP label on the same network interface. During an IP address takeover by using the IP aliases, when an IP label moves from one NIC to another, the target NIC receives the new IP label as an IP alias and keeps the original IP label and hardware address.

Configuring networks for IP address takeover (IPAT) by using IP aliases simplifies the network configuration in the PowerHA SystemMirror. You can configure a service address and one or more boot addresses for NICs.

PowerHA SystemMirror uses a technology referred to as IPAT by using IP aliases for keeping IP addresses highly available.

If you are planning for IP address takeover by using IP aliases, review the following information:
v Each network interface must have a boot IP label that is defined in the PowerHA SystemMirror. The interfaces that are defined in the PowerHA SystemMirror are used to keep the service IP addresses highly available.
v The following subnet requirements apply if multiple interfaces are present on a node that is attached to the same network:
– All boot addresses must be defined on different subnets.
– Service addresses must be on a different subnet from all boot addresses and persistent addresses.
v Service address labels that are configured for IP address takeover by using IP aliases can be included in all nonconcurrent resource groups.

v The netmask for all IP labels in a PowerHA SystemMirror for Linux network must be the same.

During a node fallover event, the service IP label that is moved is placed as an alias on the NIC of the target node in addition to any other service labels configured on that NIC.

If your environment has multiple adapters on the same subnet, all the adapters must have the same network configuration and the adapters must be part of the PowerHA SystemMirror configuration.

Linux operating system requirements

The cluster node on which you want to install PowerHA SystemMirror for Linux must be running one of the following versions of the Linux operating system:
v SUSE Linux Enterprise Server (SLES) 12 SP1 (64-bit)
v SLES 12 SP2 (64-bit)
v SLES 12 SP3 (64-bit)
v SLES for SAP 12 SP1 (64-bit)
v SLES for SAP 12 SP2 (64-bit)
v SLES for SAP 12 SP3 (64-bit)
v Red Hat Enterprise Linux (RHEL) 7.2 (64-bit)
v Red Hat Enterprise Linux (RHEL) 7.3 (64-bit)
v RHEL 7.4 (64-bit)


Note: PowerHA SystemMirror Version 7.2.2 for Linux is not supported on SLES 11 for SAP.


Installing PowerHA SystemMirror for Linux

You must understand the planning and prerequisite information about PowerHA SystemMirror for Linux before you install the product.

Planning the installation of PowerHA SystemMirror for Linux
Before you install PowerHA SystemMirror in your Linux environments, you must ensure that all prerequisites are met.

Packaging

You can download PowerHA SystemMirror for Linux from the IBM website.

Prerequisites

You must fulfill the software and hardware requirements for PowerHA SystemMirror for Linux. Before you install the PowerHA SystemMirror on a Linux system, you must meet the following prerequisites:
v Root authority is required to install PowerHA SystemMirror.
v The following scripting packages are required in each SUSE Linux Enterprise Server and Red Hat Enterprise Linux (RHEL) system:
– KSH93 (ksh-93vu-18.1.ppc64le) for SLES and KSH93 (ksh-20120801) for RHEL
– PERL
– lvm2
You must use the Korn shell version ksh93vu suggested by SUSE. Download the KSH93 package from the SUSE website. The downloaded files contain the source package for the ksh-93vu-18.1.src.rpm files that must be compiled to get the ksh-93vu-18.1.ppc64le.rpm files. For more information about RPM packages, refer to the SUSE technical notes (Section 6.2.6 Installing and Compiling Source Packages) on the SUSE Documentation website.

v The following packages are required in the RHEL system:
– bzip2

– nfs-utils

– perl-Pod-Parser

– bind-utils

v Additionally, the following packages must be installed in RHEL systems for disk heartbeat configuration:
– lsscsi

– sg3_utils

Checking prerequisites

To verify whether all prerequisites are met, complete the following steps:
1. Log in as root user.
2. After you have downloaded the tar file from the IBM website, extract the tar file by entering the following command:
tar -xvf <tar file>
3. Enter the following command:
cd PHA7221Linux64


4. To start the prerequisites check, enter the following command:
./installPHA --onlyprereqcheck
5. When the check is complete, check the following log file for information about missing prerequisites:
/tmp/installPHA.<#>.log
The hashtag (<#>) is a number; the highest number identifies the most recent log file.
6. If your system did not pass the prerequisites check, correct any problems before you start the installation.

Installing PowerHA SystemMirror for Linux
After you complete the planning steps, you are ready to install PowerHA SystemMirror for Linux in your Linux environment.

Running the installation

To install PowerHA SystemMirror for Linux, you must use the installation script. The installation script performs the following actions:

Prerequisites check
The installation script runs a complete prerequisite check to verify that all required software is available and is at the required level. If your system does not pass the prerequisite check, the installation process does not start. To continue with the installation process, you must install the required software.

Installing PowerHA SystemMirror for Linux
If an IBM Reliable Scalable Cluster Technology (RSCT) peer domain exists, ensure that the node on which you are running the script is offline in the domain. Otherwise, the installation is canceled. To install the PowerHA SystemMirror for Linux, complete the following steps:
1. Log in as root user.
2. Download the tar file from the Entitled Systems Support website and extract the tar file by entering the following command:
tar -xvf <tar file>
3. Run the following installation script:
./installPHA

Note: You do not need to specify any of the options that are available for the installPHA command.

4. The installation program checks prerequisites to verify that all the required software is available and is at the required level. If your system does not pass the prerequisites check, the installation does not start, and you must correct any problems before you restart the installation. Information about the results of the prerequisites check is available in the following log file:
/tmp/installPHA.<#>.log

5. After the system passes the prerequisite check, read the information in the license agreement and the license information. You can scroll forward line-by-line by using the Enter key, and page-by-page with the space bar. After reviewing the license information, to indicate acceptance of the license terms and conditions, press the Y key. Any other input cancels the installation.

6. After you accept the license agreement, installation proceeds. You must check the following log file for information about the installation:
/tmp/installPHA.<#>.log

The hashtag (<#>) is a number; the highest number identifies the most recent log file.


7. You can verify the installation process by running the lssrc -a command, which lists the available subsystems. After installation, you must ensure that the clcomd subsystem is available and active in the list.

Related information:

Video: Installing PowerHA SystemMirror for Linux

Cluster snapshot for PowerHA SystemMirror for Linux
In PowerHA SystemMirror for Linux, you can use the cluster snapshot utility to save and restore cluster configurations. Snapshots provide useful information for troubleshooting cluster problems and can be used to duplicate a cluster configuration on a different hardware configuration.

The cluster snapshot utility saves a record of all data that defines a specific cluster configuration. Snapshot restoration is the process of recreating a specific cluster configuration by using the cluster snapshot utility. You can restore a snapshot on nodes even if a cluster does not exist.
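As a hedged illustration of saving the current configuration with the snapshot actions described below (the snapshot name snap1 is a placeholder, and the exact argument list is an assumption based on the clmgr conventions shown elsewhere in this document):
clmgr add snapshot snap1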

You can use the clmgr snapshot command with actions such as add, modify, query, or delete to utilize the snapshot utility feature.
Related information:
clmgr command


Configuring PowerHA SystemMirror for Linux

After you install PowerHA SystemMirror for Linux, you can configure the product by creating a cluster and by adding nodes to the cluster.

Creating a cluster for PowerHA SystemMirror for Linux
You can configure the basic components of a cluster by using the options of the clmgr command.

The configuration path significantly automates the discovery and selection of configuration information and chooses default behaviors.

Prerequisite tasks for configuring the cluster

Before you configure a cluster, PowerHA SystemMirror for Linux must be installed on all nodes and connectivity must exist between the node where you are performing the configuration and all other nodes that need to be included in the cluster.

Network interfaces must be both physically and logically configured with the Linux operating system so that communication occurs from one node to each of the other nodes. The host name and IP address must be configured on each interface.

Note: All host names, service IP addresses, persistent IP addresses, and labels must be configured in the /etc/hosts file.

All node IP addresses must be added to the /etc/cluster/rhosts file before you configure the cluster to verify that information is collected from the systems that belong to the cluster. PowerHA SystemMirror uses all configured interfaces on the cluster nodes for cluster communication and monitoring. All configured interfaces are used to keep cluster IP addresses highly available.
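As a minimal sketch, assuming a two-node cluster with placeholder addresses, the /etc/cluster/rhosts file simply lists one node IP address per line:
10.10.10.1
10.10.10.2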

Configuring a cluster

To configure a typical cluster component, complete the following steps:

Configure a cluster
To configure a cluster, use the clmgr add cluster command.
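For example, a minimal invocation (the cluster and node names are placeholders; clmgr also accepts create as a synonym for add, as in the example in the next topic):
clmgr add cluster clMain NODES=nodeA,nodeB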

Configure additional topology components
To configure additional components for a cluster, you can perform the following actions:
v Add or delete additional nodes from the cluster. For instructions, see "Adding a node to a cluster for PowerHA SystemMirror for Linux" on page 34.
v Configure PowerHA SystemMirror resources that are used to configure service IP and persistent IP addresses. For instructions, see "Configuring resources for PowerHA SystemMirror for Linux" on page 34.

Configure the cluster resources
Configure the resources to be made highly available. For instructions, see "Configuring resources for PowerHA SystemMirror for Linux" on page 34 to configure resources that must be shared among the nodes in the cluster. You can configure the following resources:
v Application (scripts to start, stop, and monitor applications)
v Service IP
v Persistent IP
v File system


Configure the resource groups
To create the resource groups for each set of related resources, use the clmgr add resource_group command. Additionally, you can add resources to a resource group when you create the resource group.

Assign resources to the respective resource group
To assign resources to each resource group while creating a resource group, use the clmgr add resource_group command. To modify an existing resource group, use the clmgr modify resource_group command.

Manage and view log data
To adjust the log file view and log file management, use the clmgr modify log or clmgr view log command. This step is optional.

Display the cluster configuration
To view the cluster topology and resources configuration, use the clmgr query cluster or clmgr query resource_group command. This step is optional.

Configure disk heartbeat
To configure disk heartbeat, use the clmgr add network command. This step is optional.

Perform additional configurations
The following optional cluster configuration steps can be performed depending on application requirements:
v Configuring runtime policies of a resource group
v Configuring dependencies between resource groups
v Adding more applications
v Configuring file collections
v Configuring a cluster user or user group

Related information:
clmgr command

Video: Creating a cluster

Adding a node to a cluster for PowerHA SystemMirror for Linux
You can use the clmgr command to add a node to an existing cluster. You can add a node to an existing cluster dynamically.

In this scenario, if you are creating a cluster that is named clMain, and if the participating nodes are nodeA and nodeB, enter the following command:
clmgr create cluster clMain NODES=nodeA,nodeB

After creating the clMain cluster with nodeA and nodeB, you can also add nodeC to the clMain cluster by using the clmgr add node nodeC command.
Related information:
clmgr command

Configuring resources for PowerHA SystemMirror for Linux
You can configure resources that are required by the cluster application by using the clmgr command.

You must first define resources that are made available by the PowerHA SystemMirror for Linux for an application and then group them together in a resource group. You can add all the resources at once or separately.


The following types of resources can be configured in a cluster:

Application
The scripts that are used to start, stop, and monitor the application.

PowerHA SystemMirror service IP labels or IP addresses
The service IP label or IP address is the IP label or IP address over which services are provided and which is kept highly available by PowerHA SystemMirror.

PowerHA SystemMirror persistent IP labels or IP addresses
The persistent IP label or IP address is the IP label or IP address that allows you to have a node-bound address on a cluster network that you can use for administrative purposes to access a specific node in the cluster.

File system
A file system is a simple database for storing files and directories. A file system in Linux is stored on a single logical volume. The main components of the file system are the logical volume that holds the data and the file system log. PowerHA SystemMirror supports both ext3 and xfs as shared file systems.

Configuring a persistent node IP label or IP address

A persistent node IP label is an IP alias that can be assigned to a specified node on a network. A persistent IP label has the following features:
v Always remains on the same node.
v Coexists with other IP labels that are present in an interface.
v Does not require an additional physical interface in a node.
v Always remains in an interface of the PowerHA SystemMirror network.

You can assign a persistent node IP label to a network on a node that allows you to have a node-bound address on a cluster network that you can use for administrative purposes to access a specific node in the cluster.

To add a persistent node IP label, enter the following command:
clmgr add persistent_ip pip_node1 NETWORK=net_ether_01 NODE=node1 NETMASK=<255.255.255.0>

In this example, the persistent node IP label named pip_node1 is assigned to the node1 node and uses interfaces that are defined in the net_ether_01 network.

Consider the following limitations for using the persistent node IP label or IP address:
v You can define only one persistent IP label on each node.
v Persistent node IP labels are available at the boot time of a node.
v You must configure persistent node IP labels individually on each node.
v To change or show persistent node IP labels, you must use the Modify and Query ACTION and persistent node IP label in the clmgr command.

Configuring the service IP labels or IP addresses

Service IP labels and IP addresses are used to establish communication between client nodes and the server node. Services such as a database application are provided by using the connection that is made over the service IP label. The /etc/hosts file on all nodes must contain all IP labels and associated IP addresses that you define for the cluster, including service IP labels and addresses.

To add a service IP label, enter the following command:


clmgr add service_ip sip NETWORK=net_ether_01 NETMASK=<255.255.255.0>

In this example, the service IP label named sip is established and is made available by using the net_ether_01 network.

Note: Enter the service IP label that you want to keep highly available. The name of the service IP label or IP address must be unique within the cluster and distinct from the resource group names, and it must relate to the application and also to any corresponding device. For example, sap_service_address.

Configuring a PowerHA SystemMirror application

A PowerHA SystemMirror application is a cluster resource that is used to control an application that the user wants to make highly available. It contains scripts for starting, stopping, and monitoring an application.

To configure an application, enter the following command:
clmgr add application app_1 TYPE=custom
STARTSCRIPT="/path/to/startscript"
STOPSCRIPT="/path/to/stopscript"
MONITORMETHOD="/path/to/monitorscript"
RESOURCETYPE=1

In this example, an application named app_1 will be a non-concurrent application because the RESOURCETYPE flag is set to 1. The app_1 application uses the STARTSCRIPT flag for starting the application, the MONITORMETHOD flag for monitoring the application, and the STOPSCRIPT flag for stopping the application.

The application process that is invoked from the STARTSCRIPT must be detached from the calling script by using either of the following methods:
v Redirect all file handles to a file and start the application process in the background. For example:
/path/to/application >/outputfile 2>&1 &
v Create a wrapper application that uses the setsid() C function to detach the application process from the calling STARTSCRIPT.
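A lighter-weight sketch of the second approach, assuming the util-linux setsid command is available on the node (paths are placeholders; this is an alternative to writing a custom C wrapper, not part of the documented method):
setsid /path/to/application >/outputfile 2>&1 &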

When you configure an application, you complete the following tasks:
v Associate a meaningful name with the application. For example, the application you are using with PowerHA SystemMirror is named sapinst1. You use this name to refer to the application when you define it as a resource. When you set up the resource group that contains this resource, you define an application as a resource.
v Configure application start, stop, and monitoring scripts for that application.
v Review the vendor documentation for specific product information about starting and stopping a particular application.
v Verify that the scripts exist and have executable permissions on all nodes that participate as possible owners of the resource group where this application is defined.

The clmgr add application command includes the following attributes:

Application
Enter an ASCII text string that identifies the application. You use this name to refer to the application when you add it to the resource group. The application name can include alphanumeric characters and underscores.

STARTSCRIPT
Enter the full path name of the script, followed by arguments, that is called by the event scripts of the cluster to start the application. Although this script must have the same name and location on every node, the content and function of the script can be different. You can use the same script and runtime conditions to modify the runtime behavior of a node.

STOPSCRIPT
Enter the full path name of the script that is called by the cluster event scripts to stop the application. This script must be in the same location on each cluster node that can start the application. Although this script must have the same name and location on every node, the content and function of the script can be different. You can use the same script and runtime conditions to modify the runtime behavior of a node.

MONITORMETHOD
Enter a script or an executable file to customize how you want to monitor the health of the specified application. This is a mandatory field. The program name must be specified as an absolute path in the file system of the target nodes.

TYPE
You can select either the process application monitoring or custom application monitoring method. The process application monitoring method detects the termination of one or more processes of an application. The custom application monitoring method checks the health of an application by using the monitor script at user-specified polling intervals. The custom monitor script returns a value of 1 for ONLINE or 2 for OFFLINE status.

RESOURCETYPE
This attribute reflects the type of application resource. The resource can be either non-concurrent, which means that the resources are serially reusable across multiple nodes, or the resource can be concurrently accessible on multiple nodes. By default, the resource type is non-concurrent.

Configuring resource groups for PowerHA SystemMirror for Linux
You can configure resource groups that use different startup, fallover, and fallback policies.

To configure a resource group, enter the following command:
clmgr add resource_group RG1 SERVICE_IP=sip APPLICATIONS=app_1

In this example, the RG1 resource group is a non-concurrent resource group that uses default policies. The resource group contains the sip IP address and the app_1 application.

Configuring a resource group includes the following tasks:
v Configuring the resource group name, startup, fallover, and fallback policies, and the nodes that can own it (use the nodelist attribute for a resource group).
v Adding resources and additional attributes to the resource group.

If you are defining the resources in your resource groups, ensure that you are aware of the following information:
v A resource group might include multiple service IP addresses. According to the resource group management policies in PowerHA SystemMirror, when a resource group is moved, all service labels in the resource group are moved as aliases to the available interfaces.
v When you define a service IP label or IP address on a cluster node, the service IP label can be used in any non-concurrent resource group.

The clmgr add resource_group command includes the following attributes:


Table 4. Add resource group fields

Field Value

Resource Group Enter the name for this group. The name of the resource group must be unique within the cluster and distinct from the service IP label and an application name. It is helpful to create a resource group name that is related to the application it serves, and also to any corresponding device. For example, sap_service_address. Do not use reserved words as the resource group name. You can check the list of reserved words. You cannot add duplicate entries.

NODES Enter the names of the nodes that can own or take over this resource group. Enter the node with the highest priority for ownership first, followed by the nodes with lower priorities, in that order.

STARTUP Defines the startup policy of the resource group:

ONLINE ON FIRST AVAILABLE NODE
The resource group activates on the first node that becomes available.

ONLINE ON ALL AVAILABLE NODES
The resource group is made online on all nodes. This is similar to the behavior of a concurrent resource group.

If you select this option for the resource group, ensure that resources in this group can be made online on multiple nodes simultaneously.

FALLOVER Select a value from the list that defines the fallover policy of the resource group:

FALLOVER TO NEXT PRIORITY NODE IN THE LIST
In the case of a fallover, the resource group that is online on only one node at a time follows the default node priority order that is specified in the nodelist attribute of the resource group (it moves to the highest priority node that is currently available).

BRING OFFLINE
Select this option to make the resource group offline.

FALLBACK Select a value from the list that defines the fallback policy of the resource group:

NEVER FALLBACK
A resource group does not fall back when a higher priority node joins the cluster.

FALLBACK TO HOME NODE
A resource group falls back to the home node.

SERVICE_LABEL This option is applicable only if you are adding resources to a non-concurrent resource group.

List the service IP labels to be taken over when this resource group is taken over. These include addresses that are available but were previously taken over or that might be taken over.

APPLICATION Specify the application to include in the resource group.

FILESYSTEM Specify the file system resource to include in the resource group.

You can modify resource group attributes or the different resources, such as the application, service IP label, or IP address, by using the clmgr modify resource_group command.

When a resource group is modified while it is in an Online state, it is temporarily brought down and then brought back to its original state by using the clmgr command.
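For example, a hedged sketch of changing the node list of an existing resource group (the names are placeholders; the NODES attribute is described in Table 4):
clmgr modify resource_group RG1 NODES=node1,node2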

Note: The maximum number of resource groups that are supported is 32.

Configuring a Shared File System resource
You can configure file system resources by using the clmgr command.

Prerequisites

To verify the shared storage functionality, complete the following steps:
1. Before you configure a file system, verify that the disk is shared between the relevant nodes of the cluster. A file system resource cannot be added to a resource group that has nodes in its node list where the disk of that file system is not shared. You can check the shared disk by using one of the following commands:


a. multipath -ll

b. lsrsrc -Ab IBM.Disk

2. Create a partition on the shared disk by using the fdisk /dev/<device> command. Complete the following steps:
a. Use the n option to create a new partition or the d option to delete a partition.
b. Change the partition type to fd by using the t option.
c. In the end, save all updates by using the w option.
3. Create a physical volume on the partition by using the pvcreate <partition> command. You can list all physical volumes by using the pvdisplay command.
4. Create a volume group on the partition by using the vgcreate <name of VG> <PV> [PV2....] command. You can list all volume groups by using the vgdisplay command.
5. Create a logical volume on the partition by using the lvcreate -L <LV size with G or M designation> /dev/<name of VG> command. You can list all logical volumes by using the lvdisplay command. A worked example of steps 2 through 5 follows this list.
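A minimal sketch of steps 2 through 5, assuming a shared multipath device /dev/mapper/mpatha whose first partition becomes /dev/mapper/mpatha1 (all device, volume group, and logical volume names are placeholders):
fdisk /dev/mapper/mpatha
pvcreate /dev/mapper/mpatha1
vgcreate vg_1 /dev/mapper/mpatha1
lvcreate -L 2G -n lv_1 vg_1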

Configuring

To configure the shared storage functionality, complete the following steps:
1. After you complete the prerequisites, check whether the logical volume is available on all the nodes of the cluster by using the lvdisplay command or the clmgr query logical_volume command.
2. If the logical volume is not visible on all the nodes, run the partprobe command on all the nodes of the cluster, or try rebooting the nodes where the logical volumes are not visible.

After the logical volume is visible on all the nodes of the cluster, configure a file system by using the following command:
clmgr add file_system /home/fs_1 TYPE=ext3 LOGICAL_VOLUME="vg_1-lv_1"

In this command, TYPE=ext3 is the file system type, /home/fs_1 is the name for the file system resource, and vg_1-lv_1 is the logical volume that is shared among the nodes and must be present in the output of the clmgr query logical_volume command.

After the file system is created, you can add it to a resource group along with the application, as shown in the sketch below. Then, change the resource group to an Online state.
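A hedged sketch of these two steps (the resource group name is a placeholder, the FILESYSTEM attribute follows Table 4, and the online action is an assumption based on clmgr conventions):
clmgr modify resource_group RG1 FILESYSTEM=/home/fs_1
clmgr online resource_group RG1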

Verifying the standard configuration
The PowerHA SystemMirror for Linux verification process checks whether specific PowerHA modifications to the Linux system files are correct, the cluster and the corresponding resources are configured correctly, security is configured correctly, all nodes agree on the cluster topology and network configuration, and the ownership and takeover of PowerHA resources are set up correctly.

After you have configured, reconfigured, or updated a cluster, you must run the cluster verification procedure. The following checks are run for the verification of PowerHA SystemMirror for Linux:
1. Determine whether there is enough free space to run PowerHA SystemMirror for Linux.
v A minimum of 100 MB of free space must be available in the /usr/sbin, /opt, and /var directories.
v On each node of the cluster, a minimum of 128 MB of RAM must be available.
2. Check installed cluster filesets on all nodes.
v To check if Tivoli System Automation (TSA) filesets are correctly installed on all the cluster nodes, run the Linux rpm command.
3. Check the installed Reliable Scalable Cluster Technology (RSCT) filesets on all nodes.


v To check if the RSCT filesets are correctly installed on all the cluster nodes, run the Linux rpm command.
4. Verify whether the start, stop, or monitor scripts that are used for an application are executable on all nodes.
v Verify the scripts that are provided by the user to start the application.
v Verify the scripts that are provided by the user to stop the application.
v Verify the scripts that are provided by the user to monitor the application.
5. Verify the availability of the logical volumes on all cluster nodes.
6. If the file system is mounted, verify the size of the logical volumes to determine how much of each logical volume is free.
7. Verify whether the same disk is part of multiple file system resources, to give an appropriate warning.
8. Verify the availability of volume groups on all cluster nodes.
9. Verify the size of logical volumes on all the cluster nodes.

To start the verification process, run the following commands:
clmgr verify cl -h
clmgr verify cluster
In the clmgr command, the verify action maps to validate (verify => validate).


Configuring dependencies between resource groups

In PowerHA SystemMirror, a dependency or a relationship exists between a source resource group and one or more target resource groups.

Dependency

To configure a dependency between two resource groups, enter the following command:
clmgr add dependency NAME=SA TYPE=STARTAFTER SOURCE=RG1 TARGET=RG2

Where,

SA Customized name that is used to define a dependency.

RG1 Source resource group.

RG2 Target resource group.

STARTAFTER
Type of dependency or relationship that you want to configure between both resource groups.

The configurable dependencies have the following attributes:

Name Specifies the dependency name defined by the user.

Source
Specifies the source resource group name.

Target Specifies the list of target resource group names.

Type

Specifies the dependency that must be applied between source and target resource groups.

Types of dependencies

You can configure the following types of dependencies between the source and target resource groups.

Start and Stop dependencies

In Start and Stop dependencies, the resource groups have a parent-child relationship. The following types of Start and Stop dependencies can be configured:

STARTAFTER
The STARTAFTER dependency ensures that the source resource group is started only when the target resource groups are in the Online state. The STARTAFTER dependency provides the following behavior scheme:


The STARTAFTER dependency defines a start sequence for the resource group A and resource group B. To start the source resource group A, the target resource group B must be started first.

Note: The resource group A and resource group B can be started on different nodes.
The following rules apply to the STARTAFTER dependency:
v The STARTAFTER dependency must not conflict with an existing DEPENDSON dependency.
v The STARTAFTER dependency does not define the location dependency between the managed resource groups. If you must define a location dependency, you must create an additional dependency.
v If PowerHA SystemMirror must start the source resource group, PowerHA SystemMirror does not attempt to start the target resource groups internally. You must start the target resource groups before you attempt to bring the source resource group to an Online state; otherwise, PowerHA SystemMirror displays an error.
v If PowerHA SystemMirror must stop the target resource group, PowerHA SystemMirror does not attempt to stop the source resource groups internally. You must stop the source resource groups before you attempt to bring the target resource group to an Offline state; otherwise, PowerHA SystemMirror displays an error.

v If the target resource group fails, the source resource group is not stopped.

STOPAFTER
The STOPAFTER dependency ensures that the source resource group can be stopped only when the target resource groups are stopped. The STOPAFTER dependency provides the following behavior scheme:

The resource group A is not stopped unless the target resource group is brought offline.

Note: The STOPAFTER dependency does not provide a start dependency or a force-down dependency.

Figure 7. STARTAFTER dependency

Figure 8. STOPAFTER dependency


DEPENDSON
The DEPENDSON dependency ensures that the source resource group is stopped when the target resource groups fail. The DEPENDSON dependency also includes an implicit collocation between the source and target resource groups. The DEPENDSON dependency provides the following behavior scheme: The resource group A depends on the functionality of resource group B. Hence, the resource group A cannot function without the resource group B.

The DEPENDSON dependency provides the following behavior schemes:
v By using the start behavior scheme, the DEPENDSON dependency defines a start sequence for the resource group A and resource group B with an implicit collocation dependency: To start the source resource group A, the target resource group B must be started first. After the target resource group B is in the Online state, the source resource group A can be started on the same node.
v By using the stop behavior scheme, the DEPENDSON dependency defines a stop sequence for the resource group A and resource group B: To stop the target resource group B, the source resource group A must be stopped first. After the source resource group A is offline, the target resource group B can be stopped.

DEPENDSONANY
The behavior of the DEPENDSONANY dependency is identical to the DEPENDSON dependency except that it does not provide the collocated constraint for the start sequence. Therefore, source and target resource groups can be started either on the same node or on different nodes. The DEPENDSONANY dependency provides the following behavior schemes:

v By using the start behavior, the DEPENDSONANY dependency defines a start sequence for the resource group A and resource group B without using the location dependency. The target resource group B must be started first to start the source resource group A. After the target resource group B is in the Online state, the source resource group A can be started.

Figure 9. DEPENDSON dependency

Figure 10. DEPENDSONANY dependency


v By using the stop behavior, the DEPENDSONANY dependency defines a stop sequence for the resource group A and resource group B. The source resource group A must be stopped first to stop the target resource group B. After the source resource group A is offline, the target resource group B can be stopped.

FORCEDDOWNBY
The FORCEDDOWNBY dependency ensures that the source resource group is moved to the Offline state if the target resource group is in the Offline state.

The FORCEDDOWNBY dependency provides the following behavior scheme:

The resource group A must be moved to an Offline state when the target resource group moves to an Offline state unexpectedly. The resource group A and resource group B can be stopped simultaneously.

Note: The FORCEDDOWNBY dependency does not provide a start behavior scheme or a stop behavior scheme.

Location dependencies

Location dependencies ensure that the source resource group and target resource groups are in an Online state on either the same node or different nodes. The location dependencies are of the following types:

COLLOCATED
The COLLOCATED dependency ensures that the source resource group and the corresponding target resource groups are in an Online state on the same node. The COLLOCATED dependency provides the following behavior scheme:

The COLLOCATED dependency ensures that when the resource group A starts, it moves to an Online state only on the node where the resource group B is already running.

Figure 11. FORCEDDOWNBY dependency

Figure 12. COLLOCATED dependency


ANTICOLLOCATED
The ANTICOLLOCATED dependency ensures that the source resource group and the corresponding target resource groups are in the Online state on different nodes. The ANTICOLLOCATED dependency provides the following behavior scheme:

The ANTICOLLOCATED dependency ensures that the resource group A can be started only on a node where the resource group B is not already running.

Note: If the target resource group B cannot be started on a node other than the source node, the target resource group B can be started where the source resource group A is already running.

ISSTARTABLE
The ISSTARTABLE dependency provides the following behavior scheme:

The ISSTARTABLE dependency indicates that the resource group A can be placed only on a node where the resource group B can be started when the resource group A and resource group B are in the Online state.

Figure 13. ANTICOLLOCATED dependency

Figure 14. ISSTARTABLE dependency


Troubleshooting PowerHA SystemMirror for Linux

Use this information to troubleshoot the PowerHA SystemMirror software for the Linux operating system.

Troubleshooting PowerHA SystemMirror clusters
Typically, a functioning PowerHA SystemMirror cluster requires minimal intervention. If a problem does occur, diagnostic and recovery skills are essential. Therefore, troubleshooting requires that you identify the problem quickly and apply your understanding of the PowerHA SystemMirror software to restore the cluster to full operation.

Tuning a cluster for optimal performance can help you to avoid common PowerHA SystemMirror cluster problems.

Using PowerHA SystemMirror cluster log files
You can troubleshoot a cluster by using the PowerHA SystemMirror cluster log files.

Reviewing cluster message log files

PowerHA SystemMirror writes messages that it generates to the system console and to several log files. Each log file contains a different subset of messages that are generated by the PowerHA SystemMirror software. When viewed as a group, the log files provide a detailed view of all cluster activity.

The following list describes the log files into which the PowerHA SystemMirror software writes messages and the types of cluster messages they contain. The list also provides recommendations for using different log files.

Note: Only default directories are listed in the following list. If you redirect any log files, you must check the appropriate location.

system log messages
Contains time-stamped and formatted messages from all subsystems, including scripts and daemons.

The log file name is /var/log/messages.

/var/pha/log/clcomd/clcomddiag.log
Contains time-stamped, formatted, diagnostic messages generated by the clcomd daemon.

Recommended Use: Information in this file is for IBM Support personnel.

/var/pha/log/hacmp.out
Contains time-stamped, formatted messages generated by PowerHA SystemMirror events.

/var/pha/log/clmgr/clutils.log
Contains information about the date, time, and commands that are run by using the clmgr command.

/var/pha/log/appmon.log
Contains information about the exit code and failure counts generated when you run the application monitor script.

Using the Linux log collection utility
You can use the clsnap Linux command to collect data from a PowerHA SystemMirror cluster.


clsnap command

The clsnap command collects the log files from all nodes of the cluster and saves the log data as a compressed .tar file in the /tmp/ibmsupt/hacmp directory for every cluster node. The logs of the clsnap command are created in the /tmp/ibmsupt/hacmp.snap.log file. The clsnap command performs the following operations:
v To collect log information from the local node, enter the following command:

clsnap -L

v To collect log information from the specified node, enter the following command:
clsnap -n <node-name>

v To collect log information in a specified directory, enter the following command:
clsnap -d <dir>

Solving common problems
You can interact with PowerHA SystemMirror by using relevant clmgr command arguments. If an error occurs when you run the clmgr command, and if the error reported on the console is not clear, you must check the /var/pha/log/clmgr/clutils.log log file.

PowerHA SystemMirror startup issues
These topics describe potential PowerHA SystemMirror startup issues.

PowerHA SystemMirror failed to create the cluster
This topic discusses a possible cause for failure in cluster creation.

Problem

Cluster creation fails with a message that it cannot connect to other cluster nodes.

Solution

A number of situations can cause this problem. Review the following information to identify a possible solution for this problem:
v Check whether the IPv4 address entry exists in the /etc/cluster/rhosts file on all nodes of the cluster.
v If any entry is recently added or updated, refresh the Cluster Communication Daemon subsystem (clcomd) by using the refresh -s clcomd command on all nodes of the cluster and then create the cluster again.
v Check whether the Cluster Communications (clcomd) daemon process is running by using the ps -aef | grep clcomd command. If the clcomd daemon process is not listed in the process table, start the clcomd daemon manually by using the startsrc -s clcomd command.
v Check and ensure that the other nodes are not part of any cluster.
v Check that the host names of the different nodes are in the /etc/hosts file.
v Check the firewall status by running the systemctl status firewalld.service command. The firewall should be disabled or the following ports must be opened:
657/tcp
16191/tcp
657/udp
12143/udp
12347/udp
12348/udp
– To open a port, enter the following command:
firewall-cmd --permanent --zone=public --add-port=<port no>/<protocol>


Example: firewall-cmd --permanent --zone=public --add-port=16191/tcp
– To list all open ports on a firewall, enter the following command:
firewall-cmd --list-all
v Check whether the subnet configuration is correct on all interfaces for all nodes that are part of the cluster. The subnets of all the interfaces on the same node must be different. For example, the subnets of eth1, eth2, and eth3 on node1 might be 10.10.10.0/24, 10.10.11.0/24, and 10.10.12.0/24.

Highly available resource failed to start
This topic examines a situation where a highly available resource fails to start.

Problem

A highly available resource fails to start.

Solution

You can review the following information to identify possible solutions for this problem:
1. Check for messages that are generated when you run the StartCommand command for that resource in the system log file (/var/log/messages), and in the ps -ef process table. If the StartCommand command is not run, proceed with the next step; otherwise, investigate why the application is not online.
2. Ensure that either more than half of the nodes in the cluster are online, or exactly half of the nodes are online and the tiebreaker function is reserved. If less than half of the nodes are online, start the additional nodes. If exactly half of the nodes are online, check the attribute of the active tiebreaker. You can check the active tiebreaker by running the clmgr query cluster command.
3. In some scenarios, a resource moves to the Sacrificed state when PowerHA SystemMirror cannot find a placement for the resource. PowerHA SystemMirror cannot start this resource because there is no single node on which this resource might be started.
To resolve this problem, ensure that the network that the Service IP resource in the resource group uses includes at least one of the nodes. This node must be part of the resource group nodelist. To check whether different nodes are assigned on a network, run the clmgr query network command. If different nodes are assigned on the network, delete the network and add it again with the correct entries of the nodes by using the clmgr add interface command. To display the detailed information about resource groups and the resources, run the clRGinfo -e command. This solution might resolve the issues.
If the application resource is in the Sacrificed state, check whether the resource groups are placed on the same node even though they have an AntiCollocated relationship between them. To resolve this issue, move one of the resource groups to the other node.

Resource group does not start

This topic examines a situation where a resource group fails to start.

Problem

A resource group does not start.

Solution

A resource group is composed of several resources. If none of the resources of the resource group starts, perform the following steps:
1. Identify which of the resources must start first by evaluating the relationship status between them.
2. Check all requests against the resource group, and evaluate all relationships in which the resource group is defined as a source.

PowerHA SystemMirror goes to Warning State

This topic discusses a possible cause for a cluster to be in the Warning state.


Problem

A cluster goes to the Warning state.

Solution

If a cluster goes into the Warning state, check for the following conditions and resolve them:
• Not all nodes of the cluster are in the Online state.
• Not all nodes of the cluster are reachable.

PowerHA SystemMirror fails to add node

This topic discusses a possible cause for failure while adding a node to the cluster.

Problem

Unable to add a node to the cluster. For example:
2632-077 The following problems were detected while adding nodes to the domain.
As a result, no nodes will be added to the domain.
rhel72node2: 2632-068 This node has the same internal identifier as rhel72node1 and cannot be included in the domain definition.

Solution

This error occurs if you use cloned operating system images. To fix this issue, you must reset the cluster configuration by running the /opt/rsct/install/bin/recfgct -F command on the node that is specified in the error message. This action resets the RSCT node ID.

PowerHA SystemMirror disk issues

These topics describe potential disk and file system issues.

PowerHA SystemMirror failed to configure the disk tiebreaker

This topic describes situations where PowerHA SystemMirror failed to configure the disk tiebreaker.

Problem

PowerHA SystemMirror is unable to configure the disk tiebreaker.

Solution

Check whether the disk is shared among all the nodes by comparing the Universally Unique Identifier (UUID) of the disk on all the nodes of the cluster. You can additionally perform the following steps:
• Use the lsscsi command to list the SCSI devices in the system.
• Run the /usr/lib/udev/scsi_id -gu <SCSI_disk#> command on all nodes to check the disk ID, and ensure that the disk ID is the same across all nodes of the cluster.
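A quick way to compare the disk identity across nodes is sketched below; it assumes passwordless SSH between the nodes and that /dev/sdb is the candidate tiebreaker disk (both are assumptions, not values from this document):

    # Compare the SCSI identifier of the candidate tiebreaker disk on every node.
    for n in node1 node2; do
        echo "== $n =="
        ssh root@$n /usr/lib/udev/scsi_id -gu /dev/sdb
    done
    # The printed identifiers must be identical on all nodes.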

PowerHA SystemMirror failed to configure the disk heartbeat

This topic describes situations where PowerHA SystemMirror failed to configure the disk heartbeat.

Problem

PowerHA SystemMirror is unable to configure the disk heartbeat.


Solution

Check whether the disk is a shared disk among all nodes by comparing the physical volume identifier (PVID) of the disk on all nodes of the cluster. You must ensure that the PVID of that disk is the same on all nodes.

PowerHA SystemMirror is not able to detect common disk

This topic describes situations where the PowerHA SystemMirror software is not able to detect the shared disk across nodes of the cluster.

Problem

PowerHA SystemMirror software is not able to detect the shared disk across nodes of the cluster.

Solution

• Use the lsrsrc IBM.Disk command to view the common disk between two nodes and ensure that the cluster is present. The lsrsrc command works only if the cluster is located in the common disk.
• Choose the DeviceName attribute that corresponds to the nodelist attribute that contains the total number of nodes of the cluster.
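For example, a minimal sketch (the IBM.Disk class and the DeviceName attribute are named in the text above; the exact set of attributes can vary by RSCT level):

    # List all disk resources known to RSCT:
    lsrsrc IBM.Disk
    # Narrow the output to the device names only:
    lsrsrc IBM.Disk DeviceName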

PowerHA SystemMirror resource and resource group issues

These topics describe potential resource and resource group issues.

Highly available resources are in the Failed Offline state

This topic examines situations when highly available resources fail.

Problem

PowerHA SystemMirror sets the resource to the Failed Offline state because the previous attempt to start the resources failed.

Solution

This problem has two possible causes:

Cluster node is not online
    If a cluster node is not online, all resources that are defined on the node have the Failed Offline state. In this case, the problem is not related to the resource but to a node.

Monitor command of resource returns failure
    To determine the issue in this case, run the Monitor command by using the following steps:
    1. Run the Monitor command.
    2. Get the return code by entering the echo $? command.
    If the return code is 3 (Failed Offline state), determine the reason for the Monitor command failure. To investigate this problem, check the system log (/var/log/messages) for any message that indicates a timeout for this resource, or check whether all the scripts or binary files that are started internally by the start, stop, or monitor scripts are present at the correct path and have the required permissions.
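    For example, assuming /usr/local/bin/app_monitor.sh is the monitor script of the affected resource (a hypothetical path):

        /usr/local/bin/app_monitor.sh    # hypothetical monitor script
        echo $?                          # 1 = Online, 2 = Offline, 3 = Failed Offline
        # For return code 3, look for timeouts in the system log:
        grep -i timeout /var/log/messages | tail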

    Finally, the resource can be reset by using the clmgr command:
    clmgr reset application <application> [NODE=<node_name>]
    clmgr reset service_ip <service_ip> [NODE=<node_name>]
    clmgr reset file_system <file_system> [NODE=<node_name>]

    After this step, PowerHA SystemMirror first stops this resource, and if the corresponding resource group is in an Online state, it starts that resource group again.


Resource group has Failed Offline state

This topic examines situations where a resource group is in the Failed Offline state.

Problem

PowerHA SystemMirror sets the resource group to the Failed Offline state because the previous attempt to start the resource group failed.

Solution

If the resources of a resource group do not start and the resource group shows the Failed Offline state, it indicates that the binder was unable to find a placement for the resources and the resource group now shows the Sacrificed state.

Also, check whether other resources have the Failed Offline state, and set those resources to the Online or Offline state.

Resource group has Stuck Online state

This topic examines situations where a resource group is in the Stuck Online state.

Problem

PowerHA SystemMirror shows that the state of the resource group is Stuck Online.

Solution

The possible causes follow:
• The Monitor command of a resource group returns an error code. This can be checked by running the Monitor command manually and checking the return code by entering the echo $? command. If the return code is 4 (Stuck Online state), determine the reason for the Monitor command failure.
• If PowerHA SystemMirror cannot stop a resource, the resource is set to the Stuck Online state. This issue occurs if the Stop script that is run against the resource fails to bring the resource to an Offline state.

This error requires manual intervention, and the stop script must be corrected. The state of the resource is evaluated as Offline when you run the monitor command again or reset the resource group by using the following command:
clmgr reset resource_group <resource_group>

If the resource group state is still Stuck Online after the stop script is corrected, reset the resource group.

PowerHA SystemMirror Fallover issues

This topic describes the potential fallover issues.

PowerHA SystemMirror fallover is not triggered after a node crash or reboot

Problem

PowerHA SystemMirror fails to selectively move the affected resource group to another cluster node when a node crashes or restarts.


Solution


The possible causes follow:
• Check whether either more than half of the nodes in the cluster are online, or exactly half of the nodes are online and the tiebreaker is reserved. If fewer than half of the nodes are online, start additional nodes. If exactly half of the nodes are online, check the attribute of the active tiebreaker by using the clmgr query cluster command. A quorum-check sketch follows this list.
• The split policy must be set to None. If it is set to Manual, user intervention is required for fallover after a node reboots.
• The policy of the resource group must be FBHN, which can be checked by using the clmgr query rg command.
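To run these checks, a minimal sketch (clmgr and lssrc are the commands this document uses elsewhere; run on any cluster node):

    # Verify how many nodes are online.
    clmgr query node -v
    # Inspect the operational quorum state reported by the recovery resource manager.
    lssrc -ls IBM.RecoveryRM | grep "Operational Quorum State"
    # Review the split policy and the active tiebreaker attribute.
    clmgr query cluster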

PowerHA SystemMirror additional issues

This topic describes additional issues of PowerHA SystemMirror.

Application is not working

Problem

The start, stop, or monitor scripts are not working properly.

Solution

• Ensure that the path and arguments of all the scripts are correct.
• Try to run the start script manually, redirect all output to a file, and observe the behavior of the scripts. For example, run the /usr/bin/application >/outputfile script.
• Ensure that the return codes of the start, stop, and monitor scripts return correct values. The monitor script must return the value 1 for ONLINE or 2 for OFFLINE status.
• The application process must be detached from the calling script by using either of the following methods (see the sketch after this list):
  – Redirect all file handles to a file and start the application process in the background, for example:
    /path/to/application >/outputfile 2>&1 &
  – Create a wrapper application that uses the setsid() C function to detach the application process from the calling StartScript.
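The document describes a C wrapper that calls setsid(); on most Linux distributions, the setsid command-line utility achieves the same detachment. A sketch, in which the application path and output file are placeholders:

    #!/bin/sh
    # Hypothetical start script: detach the application from the calling script.
    setsid /path/to/application >/outputfile 2>&1 &
    exit 0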


PowerHA SystemMirror graphical user interface (GUI)

In PowerHA SystemMirror Version 7.2.2 for Linux, you can use a graphical user interface (GUI) to monitor your cluster environment.

The PowerHA SystemMirror GUI provides the following advantages over the PowerHA SystemMirror command line:
• Monitor the status for all clusters, sites, nodes, and resource groups in your environment in a single, unified view. If any clusters are experiencing a problem, those clusters are always displayed at the top of the view, so you will be sure to see them.
• Group clusters into zones to better organize your enterprise. Zones can be used to restrict user access to clusters, and can be based on function, geographical location, customer, or any other characteristic that makes sense for your business.
• Use management features that allow authorized users to perform actions on their clusters, such as starting and stopping cluster services and resource groups, moving resource groups to new nodes, creating new clusters, creating new resource groups with resources, and more.
• Use user permissions as security controls, so that users can be restricted to only the capabilities that they are specifically authorized to have.
• Scan event summaries and read a detailed description for each event. If the event occurred because of an error or issue in your environment, you can read suggested solutions to fix the problem.
• Search and compare log files side by side. Some commonly used log files are displayed in an easier-to-read format to make identifying important information easier than ever before.
• View properties for a cluster, such as the PowerHA SystemMirror version and the names of sites and nodes.

Planning for PowerHA SystemMirror GUI

Before you can install PowerHA SystemMirror GUI, your environment must meet certain requirements.

Linux operating system requirements

The nodes in the clusters where you run the installation scripts must be running one of the following versions of the Linux operating system:
• SUSE Linux Enterprise Server (SLES) 12 SP1 (64-bit)
• SLES 12 SP2 (64-bit)
• SLES 12 SP3 (64-bit)
• SLES for SAP 12 SP1 (64-bit)
• SLES for SAP 12 SP2 (64-bit)
• SLES for SAP 12 SP3 (64-bit)
• Red Hat Enterprise Linux (RHEL) 7.2 (64-bit)
• RHEL 7.3 (64-bit)
• RHEL 7.4 (64-bit)

Note:

• Before using the PowerHA SystemMirror GUI, you must install and configure secure shell (SSH) on each node.
• OpenSSL and OpenSSH must be installed on the system that is used as the PowerHA SystemMirror GUI server.


You can install the PowerHA SystemMirror GUI on the Linux operating system by using the installPHAGUI script. For more information, see "Installing PowerHA SystemMirror GUI."

Planning for SSH

OpenSSL and OpenSSH must be installed on the system that is used as the PowerHA SystemMirror GUI server. OpenSSH is used to create secure communication between the PowerHA SystemMirror GUI server and the nodes in the cluster. OpenSSH is needed only for the cluster addition process. After the cluster addition is complete, OpenSSH is no longer used for communication between the server and the agents. For more information, see the OpenSSL website and the OpenSSH website.

The SSH File Transfer Protocol (SFTP) subsystem must be configured to work between the PowerHA SystemMirror GUI server and the nodes in the cluster. You can verify that the SFTP subsystem is configured correctly in the /etc/ssh/sshd_config file. Verify that the following path is correct:
Subsystem sftp /usr/libexec/openssh/sftp-server

If the path is not correct, you must enter the correct path in the /etc/ssh/sshd_config file, and then restart the sshd subsystem.

PowerHA SystemMirror supports both the key-based and password-based forms of SSH authentication. PowerHA SystemMirror for Linux disables password-based authentication after SSH installation; hence, only key-based authentication is available on the system. To enable password authentication in SSH, edit the /etc/ssh/sshd_config file by adding the following line:
PasswordAuthentication yes

After you add the PasswordAuthentication yes line to the /etc/ssh/sshd_config file, restart SSH by entering the following command:
systemctl restart sshd

After you enable password authentication on your system, verify that root access is available, because root authority is required to add a cluster to the PowerHA SystemMirror GUI. If root access is disabled in SSH, edit the /etc/ssh/sshd_config file by adding the following line:
PermitRootLogin yes

After you add the PermitRootLogin yes line to the /etc/ssh/sshd_config file, restart SSH by entering the following command:
systemctl restart sshd
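Both sshd_config edits can be applied in one pass. A sketch using sed; it assumes the PasswordAuthentication and PermitRootLogin directives already exist in the file (commented out or not):

    # Enable password authentication and root login, then restart sshd.
    sed -i -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' \
           -e 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
    systemctl restart sshd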

You will need the following information to properly configure SSH for the PowerHA SystemMirror GUI:

Note: You need to connect to only one node in the cluster. After the node is connected, the PowerHA SystemMirror GUI automatically adds all other nodes in the cluster.
• Host name or IP address of at least one node of the cluster.
• User ID and corresponding password that is used for SSH authentication on that node.
• SSH password or SSH key location.

Supported web browsers

PowerHA SystemMirror GUI is supported in the following web browsers:
• Google Chrome Version 57, or later
• Firefox Version 52, or later


Installing PowerHA SystemMirror GUI

Before you install the PowerHA SystemMirror GUI in your Linux environment, you must ensure that all prerequisites are met.

Prerequisites

The following prerequisites must be met before you install the PowerHA SystemMirror GUI on a Linux system:
• The PowerHA SystemMirror package must be installed on your system.
• The KSH93 scripting package is required on each SUSE Linux Enterprise Server and Red Hat Enterprise Linux (RHEL) system.

Installing the PowerHA SystemMirror GUI

The PowerHA SystemMirror GUI software is delivered as a single tar file that includes all RPMs and installation scripts. The PowerHA SystemMirror GUI installation script is available in the PHA7221Linux64/Gui/installPHAGUI directory. The installation script includes all steps for installing the PowerHA SystemMirror GUI software, such as checking for errors and other setup.
1. Untar the tar file to access the installation script.
2. To install the agent and server, run the following script:
   ./installPHAGUI
3. To install the agent only, run the following script:
   ./installPHAGUI -a -c
   Note: If any previous installation of the agent exists, uninstall it by running the following script before you install the new version:
   ./installPHAGUI -u -a -c
4. To install the server only, run the following script:
   ./installPHAGUI -s -c

Notes:

• You must run the ./installPHAGUI command on only one node of a cluster or multi-cluster environment; that node is designated as the server for the PowerHA SystemMirror GUI.
• Before you install the PowerHA SystemMirror GUI on the SUSE or RHEL environment, complete the following steps on all nodes of the cluster:
  – Change the PasswordAuthentication parameter to yes in the /etc/ssh/sshd_config file.
  – Restart the sshd daemon by entering the rcsshd restart command.

After the PowerHA SystemMirror GUI is installed, you can log in to the PowerHA SystemMirror GUI from a web browser. In hybrid environments, if you want to support AIX clusters on a Linux server, you must run the /usr/es/sbin/cluster/ui/server/bin/smuiinst.ksh command to complete the installation process. The smuiinst.ksh command automatically downloads and installs the remaining files that are required to complete the PowerHA SystemMirror GUI installation process. These downloaded files are not shipped in the RPMs because the files are licensed under the General Public License (GPL).

The PowerHA SystemMirror GUI server must have internet access, or an HTTP proxy that is configured to allow access to the internet, to run the smuiinst.ksh command. If you are using an HTTP proxy, you must run the smuiinst.ksh -p command to specify the proxy information, or you must specify the proxy information by using the http_proxy environment variable. If the PowerHA SystemMirror GUI server does not have internet access, complete the following steps:


1. Copy the smuiinst.ksh file from the PowerHA SystemMirror GUI server to a system that is running a UNIX compatible operating system and that has internet access.
2. Run the smuiinst.ksh -d /directory command, where /directory is the location where you want to download the files. For example, ./smuiinst.ksh -d /tmp/smui_rpms.
3. Copy the downloaded files (/tmp/smui_rpms) to a directory on the PowerHA SystemMirror GUI server.
4. From the PowerHA SystemMirror GUI server, run the smuiinst.ksh -i /directory command, where /directory is the location where you copied the downloaded files (/tmp/smui_rpms).
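The offline installation flow condensed into commands; a sketch in which the host name guiserver and the /tmp/smui_rpms directory are placeholders:

    # On a machine with internet access (after copying smuiinst.ksh to it):
    ./smuiinst.ksh -d /tmp/smui_rpms

    # Copy the downloaded files to the GUI server:
    scp -r /tmp/smui_rpms root@guiserver:/tmp/smui_rpms

    # On the GUI server, finish the installation from the copied files:
    ./smuiinst.ksh -i /tmp/smui_rpms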

Logging in to the PowerHA SystemMirror GUI

After you install the PowerHA SystemMirror GUI, you can log in to the PowerHA SystemMirror GUI from a web browser.

To log in to the PowerHA SystemMirror GUI, complete the following steps:
1. Open a supported web browser and enter https://HostName:8080/#/login, where HostName is the system on which you installed the cluster.es.smui.server RPM.
2. On the login page, enter the user name and password and click Log In. You can use the user names and passwords that already exist on the system to log in.
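If the login page does not load, you can first confirm that the GUI server is listening on port 8080. A sketch, run on the GUI server itself (the port is taken from the URL above):

    # Check that a process is listening on the GUI port.
    ss -ltn | grep 8080
    # Fetch the login page, ignoring the self-signed certificate.
    curl -k https://localhost:8080/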

Note: The first time you log in to the PowerHA SystemMirror GUI, you must add clusters to the GUI or create new clusters.

To add existing clusters to the PowerHA SystemMirror GUI, complete the following steps:

1. In the navigation pane, click the icon.
2. Select Add cluster.
3. Complete all required information.
4. Click Discover clusters.

To create new clusters for the PowerHA SystemMirror GUI, complete the following steps:

1. In the navigation pane, click the icon.
2. Select Create cluster.
3. Complete all required information.
4. Click Complete.

Navigating the PowerHA SystemMirror GUI

The PowerHA SystemMirror graphical user interface (GUI) provides you with a web browser interface that can monitor your PowerHA SystemMirror environment.

Health summary

In the PowerHA SystemMirror GUI, you can quickly view all events for a cluster in your environment. The following figure (Figure 15. Health summary) identifies the different areas of the PowerHA SystemMirror GUI that are used to view events and status.


Navigation pane
    This area displays all the zones, clusters, sites, nodes, and resource groups in a hierarchy that was discovered by the PowerHA SystemMirror GUI. You can click to view resources for each cluster.

    Note: The clusters are displayed in alphabetic order. However, any clusters that are in a Critical or Warning state are listed at the top of the list.

Note: The Network tab is not included in PowerHA SystemMirror for Linux Version 7.2.2.

Health Summary
    This menu provides cluster administrative features for the selected item. You can select Add Cluster, Create Zone, Remove Cluster, or Create Cluster from the Health Summary menu.

Scoreboard
    This area displays the number of zones, clusters, nodes, and resource groups that are in the Critical, Warning, or Maintenance state. You can click Critical, Warning, or Maintenance to view all the messages for a specified resource. For example, in the figure, 2 resource groups are identified. If the warning icon were highlighted and you clicked it, all messages (critical, warning, and normal) for the 2 resource groups would be displayed.

Event filter
    In this area, you can click the icons to display all events in your environment that correspond to a specific state. You can also search for specific event names.

Event timeline
    This area displays events across a timeline of when each event occurred. This area allows you to view the progression of events that lead to a problem. You can zoom in and out of the time range by using the + or - keys or by using the mouse scroll wheel.

Event list
    This area displays the name of the event, the time when each event occurred, and a description of the event. The information that is displayed in this area corresponds to the events you selected from the event timeline area. The most recent event is displayed first. You can click this area to display more detailed information about the event, such as possible causes and suggested actions.

Action Menu
    This area displays the following menu options:

User Management
    The PowerHA SystemMirror GUI allows an admin to create and manage users by using the User Management menu. The admin can assign built-in roles to new users.

    Note: You can add only user names that are defined on the host that runs the PowerHA SystemMirror GUI server.

Role Management
    The Role Management tab displays information about available roles for each user. An admin can create custom roles and provide permissions to different users. The PowerHA SystemMirror GUI provides the following roles:
    • ha_root
    • ha_mon
    • ha_op
    • ha_admin

Zone Management
    You can create zones, which are groups of clusters. An admin can create zones and assign any number of clusters to a zone. You can also add new zones or edit existing zones.

View Activity Log
    You can view information about all activities performed in the PowerHA SystemMirror GUI that resulted in a change by using the View Activity Log tab. This view provides various filters to search for specific activities for the cluster, roles, zone, or user management changes.

Troubleshooting PowerHA SystemMirror GUI

You can view log files to help you troubleshoot the PowerHA SystemMirror GUI.

The information in this section is only a reference guide for different techniques and log files that you might be able to use to troubleshoot problems with the PowerHA SystemMirror GUI. You should always contact IBM support if you are uncertain about the information here or how to solve your problem.

Log files

You can use the following log files to troubleshoot PowerHA SystemMirror GUI:

smui-server.log
    This log file is located in the /usr/es/sbin/cluster/ui/server/logs/ directory. The smui-server.log file contains information about the PowerHA SystemMirror GUI server.


smui-agent.log
    This log file is located in the /usr/es/sbin/cluster/ui/agent/logs/ directory. The smui-agent.log file contains information about the agent that is installed on each PowerHA SystemMirror node.

notify-event.log
    This log file is located in the /usr/es/sbin/cluster/ui/agent/logs/ directory. The notify-event.log file contains information about all PowerHA SystemMirror events that are sent from the agent to the PowerHA SystemMirror server.

Problems logging in to PowerHA SystemMirror GUI

If you are experiencing problems logging in to the PowerHA SystemMirror GUI, complete the following steps:
1. Check for issues in the /usr/es/sbin/cluster/ui/server/logs/smui-server.log file.
2. Verify that the smuiauth command is installed correctly. Also, verify that the smuiauth command has the correct permissions by running the ls -l command against the /usr/es/sbin/cluster/ui/server/node_modules/smui-server/lib/auth/smuiauth file. An output that is similar to the following example is displayed when you run the ls -l command:
   -r-x------ 1 root system 21183 Aug 31 21:48
3. Verify that you can run the smuiauth command by running the smuiauth -h command.
4. Verify that the pluggable authentication module (PAM) framework is configured correctly by locating the following lines in the /etc/pam.d/smuiauth file:
   RHEL:
   auth requisite pam_nologin.so
   auth required pam_env.so
   auth optional pam_gnome_keyring.so
   auth required pam_unix.so try_first_pass
   account requisite pam_nologin.so
   account required pam_unix.so try_first_pass
   SUSE:
   auth requisite pam_nologin.so
   auth include common-auth
   account requisite pam_nologin.so
   account include common-account

Note: The PAM configuration occurs when you install the PowerHA SystemMirror GUI server.

Problem adding clusters to the PowerHA SystemMirror GUI

If you are not able to add clusters to the PowerHA SystemMirror GUI, complete the following steps:
1. Check for issues in the /usr/es/sbin/cluster/ui/server/logs/smui-server.log file.
   a. If SFTP-related signatures exist in the log file, such as Received exit code 127 while establishing SFTP session, a problem exists with the SSH communication between the PowerHA SystemMirror GUI server and the cluster that you are trying to add.
   b. From the command line, verify that you can connect to the target system by using the SSH File Transfer Protocol (SFTP). If you cannot connect, verify that the sshd daemon is running on the PowerHA SystemMirror GUI server and the target node by running the ps -ef | grep -w sshd | grep -v grep command. You can also check the SFTP subsystem configuration in the /etc/ssh/sshd_config file and verify that the following path is correct:
      Subsystem sftp /usr/libexec/openssh/sftp-server
      If the path is not correct, you must enter the correct path in the /etc/ssh/sshd_config file, and then restart the sshd subsystem.

2. Check for issues in the /usr/es/sbin/cluster/ui/agent/logs/agent_deploy.log file on the target cluster.
3. Check for issues in the /usr/es/sbin/cluster/ui/agent/logs/agent_distribution.log file on the target cluster.

The PowerHA SystemMirror GUI is not updating status

If the PowerHA SystemMirror GUI is not updating the cluster status or displaying new events, complete the following steps:
1. Check for issues in the /usr/es/sbin/cluster/ui/server/logs/smui-server.log file.
2. Check for issues in the /usr/es/sbin/cluster/ui/agent/logs/smui-agent.log file. If a certificate-related problem exists in the log file, the certificate on the target cluster and the certificate on the server do not match. An example of a certificate error follows:
   WebSocket server - Agent authentication failed, remoteAddress:::ffff:10.40.20.186, Reason: SELF_SIGNED_CERT_IN_CHAIN


Smart Assist for PowerHA SystemMirror

Smart Assist manages a collection of PowerHA SystemMirror components that you identify to support a particular application. You can view these collections of PowerHA SystemMirror components as a single entity, and in PowerHA SystemMirror that entity is represented by an application name.

PowerHA SystemMirror for SAP HANA

The high availability concepts of SAP HANA reduce the downtime of a system in case of software or hardware failures.

The SAP HANA high availability policy defines all SAP HANA components as resources and starts or stops them in a predefined sequence to provide high availability for your SAP HANA system. The SAP HANA components must be specified as automated resources in PowerHA SystemMirror by using Smart Assist.

The following components of the software stack in an SAP HANA installation must be highly available:

Primary and secondary host with the following processes on each host:

• The hdbdaemon daemon manages the following subprocesses:
  – hdbindexserver
  – hdbnameserver
  – hdbxsengine
  – hdbwebdispatcher
  – hdbcompileserver
  – hdbpreprocessor
• The sapstartsrv start, stop, and monitor processes for the hdbdaemon daemon.
• The IP address to access the primary host.

Planning for SAP HANA

The SAP HANA High Availability feature is supported for the following configuration in PowerHA SystemMirror Version 7.2.2 for Linux:
• SAP HANA Version 2.0
• Two-node cluster deployment
• You must install the SAP HANA database before configuring resources by using the Smart Assist application.
• The user-defined resources, such as a Persistent IP, can be deleted after configuring the Smart Assist policies.
• You must configure the NFS tiebreaker by using the clmgr command.

Prerequisites

Consider the following SAP HANA prerequisites for the System Replication scenario with data preload:
• The SAP HANA software version of the secondary system must be equal to that of the primary system.
• The secondary system must have the same SAP system ID (SID) and the same instance number as the primary system.
• System replication between two systems on the same host is not supported.


• System Replication must be set up before configuring the Smart Assist application by using the clmgr command.
• The required ports must be available. The same <instance number> is used for the primary and secondary systems. The <instance number>+1 must be free on both systems because this port range is used for system replication communication.
• An initial data backup or snapshot must be performed on the primary system before you activate the system replication.
• The SAP HANA database must be in an offline state before the Smart Assist policy is activated. Otherwise, the database stops when the Smart Assist policy is activated.
• You must ensure that the IP address that is used for the service IP label is deactivated, if it was manually added for testing, before you activate the new Smart Assist policy.
• You must have PowerHA SystemMirror Version 7.2.2 for Linux installed on your system before configuring the Smart Assist application. If you are planning an SAP HANA workload on SLES 12 SP2 onwards (SLES or SLES for SAP), enter the following commands on the cluster nodes before configuring the Smart Assist application:
  – To set the service tasks property:
    systemctl set-property srcmstr.service TasksMax=13000
  – To verify that the service tasks property was modified successfully:
    systemctl show -p TasksMax srcmstr.service

Configuring the Smart Assist application for SAP HANA

You can invoke the SAP HANA Smart Assist Policy Setup Wizard by using the clmgr setup smart_assist command:
clmgr setup smart_assist APPLICATION=SAP_HANA SID=TS2 INSTANCE=HDB02

The SAP HANA High Availability feature is defined for the SAP ID TS2 and for the instance name HDB02.

The SAP ID TS2 and the instance name HDB02 are provided during the SAP HANA installation.

To configure SAP HANA Smart Assist, complete the following steps:
1. From the command line, enter the clmgr setup smart_assist command.
2. Enter the following values for the different flags:

APPLICATION
    Enter the name of the application as SAP_HANA for the application that you want to configure by using the Smart Assist application.

SID
    Enter the SAP HANA system ID that you configured during the SAP installation.

INSTANCE
    The instance name is used for the instance directory that contains all necessary files for SAP HANA.

SAP HANA high availability Smart Assist policy parameters

The following table provides a list of all parameters that must be specified for the SAP HANA high availability policy.


Table 5. SAP HANA parameters

1. Enter the name of your peer domain cluster.
   Description: Provide the name of an existing peer domain cluster. The peer domain cluster hosts the SAP resources that are configured by using this template.
   Note: Value harvesting (automatic discovery of the value) is provided for this parameter. Values are harvested to minimize user effort; for example, the Smart Assist application automatically discovers values for parameters such as the cluster name or cluster nodes that already exist on the node.
   Value type: String. Example: C11

2. Enter the nodes where you want to automate SAP HANA.
   Description: These nodes must be listed by the PowerHA SystemMirror clmgr query node command for the specified domain. You can use either the long name or the short name for a node. A PowerHA SystemMirror SAP HANA resource is created for each of the specified nodes. The first node in the list is used as the primary instance node. The node name is also used for the remoteHost parameter in the hdbnsutil command argument to enable System Replication for the secondary node.
   Note: Value harvesting is provided for this parameter.
   Value type: List of host names. Example: Node1, Node2

3. Specify the virtual IPv4 address that clients use to connect to SAP HANA.
   Description: This virtual IPv4 address is used to reach SAP HANA, independent of the system that it is running on.
   Value type: IPv4 address. Example: 172.19.15.15

4. Specify the netmask for the SAP HANA virtual IP address.
   Description: Enter the netmask for the subnet of the SAP HANA virtual IP address.
   Value type: IPv4 address. Example: 255.255.255.0

5. Enter the network interface for the SAP HANA IP address.
   Description: The network interface specifies to which network interfaces the SAP HANA virtual IP address can be bound. The same network interface name must be available on all nodes where SAP HANA is automated.
   Value type: String (plus additional value checking). Example: Eth0

6. Specify the user name of the SAP instance owner.
   Description: The default user name of the SAP instance owner is composed of the SAP SID (in lowercase) and the suffix adm.
   Note: Value harvesting is provided for this parameter.
   Value type: String (plus additional value checking). Example: Ts2adm

7. Enter a specific prefix for all SAP HANA resources.
   Note: This prefix is used for all PowerHA SystemMirror resources that cover SAP HANA, for example, the SAP_HDB resource. You might encode the SAP solution name (for example, PI, ECC, or SCM), which results in a prefix such as PI_HDB.
   Value type: String. Example: SAP_HDB

8. Specify all site names of your SAP HANA nodes.
   Description: The site names are used to enable System Replication for the SAP HANA instance. You must use the same order as the list of nodes. An example site name is dcsite1.
   Note: Value harvesting is provided for this parameter.
   Value type: List of strings. Example: HANA_SITE_1, HANA_SITE_2

9. Select the log replication mode for the SAP HANA System Replication process.
   Description: The SAP HANA log replication mode specifies how to send log information to the secondary instance. The modes synchronous (sync), synchronous in memory (syncmem), or asynchronous (async) can be set.
   Note: Value harvesting is provided for this parameter.
   Value type: String. Example: sync

Closing the Smart Assist wizard

You can close the Smart Assist wizard by choosing one of the following options:
1. Use the 0 option to close the wizard. The user input is saved by using the clmgr smart_assist command. If all parameter values are specified correctly, which is indicated by the overall parameter status OK, you are prompted for the Smart Assist policy activation.
2. Use the X option to cancel the wizard. Confirm your cancel request to quit the wizard without saving any changes. The parameter summary is not created and you cannot activate the policy. The Finish and Cancel options are available in the Overview page and also in each parameter pane. The two actions run independently of the page or pane in which the option is selected.

Activating the Smart Assist wizard

If you used the 0 option to close the wizard and if all parameter values are specified correctly with the overall parameter status OK, you can activate the policy.

Depending on the option that you select, one of the following actions is performed:

Yes, activate as new policy
    The Smart Assist policy is activated as a new policy. If you chose this option, the following command is invoked:
    clmgr add smart_assist APPLICATION=SAP_HANA SID=<value> INSTANCE=<instance value>

Yes, activate by updating currently active policy
    The Smart Assist policy is activated by updating the currently active policy. If you chose this option, the following command is invoked:
    clmgr update smart_assist APPLICATION=SAP_HANA SID=<value> INSTANCE=<instance value>


No, save modifications and exit
    The Smart Assist policy is not activated. Your modifications are saved and the wizard is closed.

No, return to parameter overview
    The Smart Assist policy is not activated. Your modifications are not saved and the wizard displays the Overview page.

SAP HANA operations

The following operations can be performed for SAP HANA:

Setup Smart Assist
    Configures the Smart Assist policies.

Add Smart Assist
    Activates the Smart Assist policy. All existing resources are deleted.

Update Smart Assist
    Updates the active policy without stopping any resource. All existing resources are either modified or kept unchanged. The new resources are added to the policy.

Modify Smart Assist
    Modifies the active policy and removes all the existing resources. All resources that are not deleted are not stopped.

Delete Smart Assist
    Deactivates the active policy. All existing resources are deleted.

Verifying the SAP HANA High Availability feature

You can verify the start and stop function of the SAP HANA High Availability feature if the installation runs successfully. You can also verify the failover scenario for planned and unplanned outages.

To verify the start and stop function of the SAP HANA High Availability feature, complete the following steps (a combined example follows this list):
• To start your SAP HANA system, enter the clmgr online resource_group ALL command.
• To display the different resource groups created by the wizard and their corresponding state, enter the clRGinfo command.
• To stop your SAP HANA system, enter the clmgr offline resource_group ALL command.
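A typical verification pass, combined into one sequence (these are the commands listed above; the sleep is an arbitrary settling delay, not a documented value):

    clmgr online resource_group ALL     # start the SAP HANA system
    sleep 60                            # arbitrary wait for resources to settle
    clRGinfo                            # confirm that the resource groups are Online
    clmgr offline resource_group ALL    # stop the SAP HANA system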

PowerHA SystemMirror for SAP NetWeaver

The high availability concepts for an SAP system provide information about choosing the Smart Assist policy (SAP Central Services) depending on the planned SAP installation.

The high availability setup reduces the downtime of an SAP system in case of software or hardware failures. The high availability solution for SAP uses PowerHA SystemMirror to automate all SAP components. PowerHA SystemMirror detects failed components and restarts them or initiates a failover. This setup also helps to reduce the operational complexity of an SAP environment and to avoid operator errors resulting from this complexity. PowerHA SystemMirror supports high availability for ABAP and JAVA SAP Central Services installations.

Planning for SAP NetWeaver

The SAP NetWeaver High Availability feature is supported in the following configuration in PowerHA SystemMirror Version 7.2.2 for Linux:
• SAP NetWeaver Version 7.5
• Two-node cluster deployment
• The following SAP Central Services high availability installation setups through the wizard:
  – ABAP (ASCS) High Availability
  – Java (SCS) High Availability
  – SAP AppServer-only High Availability
• You must install SAP NetWeaver before configuring resources by using Smart Assist.
• User-defined resources, such as a Persistent IP, can be deleted by configuring Smart Assist policies.
• You must configure the NFS tiebreaker by using the clmgr command.

Prerequisites

Consider the following SAP NetWeaver prerequisite for configuration, which involves JAVA, ABAP, and App-server:
• The SAP system ID (SID) and instance number of SAP NetWeaver must be the same on both cluster nodes (primary and failover node).

Configuring SAP NetWeaver

You can start the Smart Assist Policy Setup wizard by using the clmgr setup smart_assist command.

Examples
1. To configure the SAP ABAP (ASCS) HA solution, enter the following command:
   clmgr setup smart_assist APPLICATION=SAP_ABAP SID=TS2 INSTANCE=SCS02
2. To configure the SAP JAVA (SCS) HA solution, enter the following command:
   clmgr setup smart_assist APPLICATION=SAP_JAVA SID=TS2 INSTANCE=SCS02
3. To configure the SAP APPSERVER HA solution, enter the following command:
   clmgr setup smart_assist APPLICATION=SAP_APPSERVER SID=TS2

SAP NetWeaver High Availability

SAP NetWeaver High Availability is defined for the SAP ID (TS2 in the above example) and the instance name (SCS02 in the above example). The SAP ID and the instance name are provided during the SAP NetWeaver installation.

To set up SAP NetWeaver Smart Assist, complete the following steps:
1. From the command line, run the clmgr setup smart_assist command.
2. Enter the flag values as follows:

APPLICATION
    Enter the name of the application to configure by using Smart Assist, for example, SAP_ABAP, SAP_JAVA, or SAP_APPSERVER.

SID
    Enter the SAP system ID configured during the SAP installation.

INSTANCE
    The instance name is used for the instance directory that contains all necessary files for the SAP Central Services instance.

SAP NetWeaver high availability Smart Assist policy parameters

The following tables provide a list of all parameters that must be specified for the SAP NetWeaver high availability policy.


Table 6. ABAP policy parameters

1. Enter the name of your peer domain cluster. Value type: String. Example: C11
2. Specify the user name of the SAP instance owner. Value type: String. Example: Ts2adm
3. Enter the prefix for all ABAP resources. Value type: String. Example: SAP_ABAP
4. Enter the nodes where you want to automate your SAP Central Services Instance for ABAP (ASCS). Value type: List of host names. Example: Node1, Node2
5. Specify the virtual host name for the Central Services Instance for ABAP (ASCS). Value type: Host name.
6. Specify the virtual IPv4 address for the SAP Central Services Instance for ABAP (ASCS). Value type: IPv4 address. Example: 172.19.15.15
7. Specify the netmask for the virtual ASCS instance IP address. Value type: IPv4 address. Example: 255.255.255.0
8. Specify the network interface name where your ASCS instance virtual IP address is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
9. Specify the instance name of the ABAP ERS instance. Value type: String; minimum 5 characters, maximum 5 characters (plus additional value checking). Example: SCS00
10. Specify the virtual host name of the ABAP ERS instance. Value type: String. Example: ERS00
11. Specify the virtual IPv4 address for the ABAP enqueue replication server (ABAP ERS). Value type: IPv4 address. Example: 172.19.15.16
12. Specify the netmask for the virtual ABAP ERS instance IP address. Value type: IPv4 address. Example: 255.255.255.0
13. Specify the network interface name where the virtual IP address for the ABAP ERS instance is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
14. Do you want to automate the ABAP application servers? Value type: {yes|no}. Example: yes
14.1. Enter the nodes where you want to automate the application servers. Value type: List of host names. Example: Node1, Node2
14.2. Specify all instance names of your application servers. Use the same order as the nodes in the previous question. Value type: List of values. Example: J01, J02
14.3. Enter the start timeout value for your ABAP application servers. Value type: Numeric. Example: 300
14.4. Enter the stop timeout value for your ABAP application servers. Value type: Numeric. Example: 300
15. During installation, did you configure a virtual host name for at least one of the application servers specified in the previous question? Value type: {yes|no}. Example: no
15.1. Specify the virtual host name for each application server. Use the same order as the nodes in the previous questions. If you installed one of the application servers without a virtual host name, specify the system host name instead. Value type: List of host names. Example: Node1, Node2
16. Do you want to automate the SAP Host Agent? Value type: {yes|no}. Example: no
16.1. Enter the nodes where you want to automate the SAP Host Agent. Value type: List of host names. Example: Node1, Node2
17. Do you want PowerHA SystemMirror to automate your SAP router? Value type: {yes|no}. Example: yes
17.1. Enter the prefix for the SAP router resources. Value type: String. Example: SAP_ROUTER
17.2. Enter the nodes where you want to automate the SAP router. Value type: List of host names. Example: Node1, Node2
17.3. Specify the virtual IPv4 address that clients will use to connect to the SAP router. Value type: IPv4 address. Example: 172.19.15.17
17.4. Specify the netmask for the virtual IP address of the SAP router. Value type: IPv4 address. Example: 255.255.255.0
17.5. Enter the network interface for the SAP router IP address. Value type: String (plus additional value checking). Example: Eth1
17.6. Specify the fully qualified routing table file name of the SAP router. Value type: String. Example: /usr/sap/TS2/SYS/global/saprouttab
18. Do you want PowerHA SystemMirror to automate the SAP web dispatcher? Value type: {yes|no}. Example: yes
18.1. Enter the desired prefix for the SAP web dispatcher resources. Value type: String. Example: SAP_AWISP
18.2. Enter the nodes where you want to automate the SAP web dispatcher. Value type: List of host names. Example: Node1, Node2
18.3. Specify the SAP system ID (SAPSID) for the SAP web dispatcher. Value type: String (plus additional value checking). Example: W0
18.4. Specify the user name of the instance owner that is used to run the start, stop, and monitor commands for SAP web dispatcher resources. Value type: String. Example: W0adm
18.5. Specify the instance name of the SAP web dispatcher instance, for example, W00. Value type: String. Example: W00
18.6. Specify the virtual host name for the SAP web dispatcher. Value type: Host name. Example: Node1, Node2
18.7. Specify the virtual IPv4 address that clients will use to connect to the SAP web dispatcher. Value type: IPv4 address. Example: 172.19.15.18
18.8. Specify the netmask for the virtual IP address of the SAP web dispatcher. Value type: IPv4 address. Example: 255.255.255.0
18.9. Specify the network interface on which the virtual IP address of the SAP web dispatcher is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
19. If your database is automated by using PowerHA SystemMirror in the PowerHA SystemMirror cluster, do you want to create the StartAfter relationships for your application servers? Note: The StartAfter relationship is created for each application server. If you want to create the StartAfter relationships for the database, the database must be automated and must belong to the same cluster as the SAP system. Value type: {yes|no}. Example: yes
19.1. Enter the name of the primary resource of the PowerHA SystemMirror database. Value type: String (primary resource name). Example: SAP_HDB_SH0_HDB00_sr_primary_hdb

Table 7. JAVA policy parameters

1. Enter the name of your peer domain cluster. Value type: String. Example: C11
2. Specify the user name of the SAP instance owner. Value type: String (plus additional value checking). Example: Ts2adm
3. Enter the prefix for all JAVA resources. Value type: String. Example: SAP_JAVA
4. Enter the nodes where you want to automate the SAP Central Services Instance for JAVA (SCS). Value type: List of host names. Example: Node1, Node2
5. Specify the virtual host name for the Central Services Instance for JAVA (SCS). Value type: Host name.
6. Specify the virtual IPv4 address for the SAP Central Services Instance for JAVA (SCS). Value type: IPv4 address.
7. Specify the netmask for the virtual instance IP address of the SAP Central Services Instance for JAVA (SCS). Value type: IPv4 address. Example: 255.255.255.0
8. Specify the network interface name where the virtual IP address of the JAVA SCS instance is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
9. Specify the instance name of the JAVA ERS instance. Value type: String; minimum 5 characters, maximum 5 characters (plus additional value checking). Example: ERS00
10. Specify the virtual host name of the SAP JAVA enqueue replication server (JAVA ERS). Value type: String. Example: Node1, Node2
11. Specify the virtual IPv4 address for the JAVA enqueue replication server (JAVA ERS). Value type: IPv4 address. Example: 172.19.15.16
12. Specify the netmask for the IP address of the virtual JAVA ERS instance. Value type: IPv4 address. Example: 255.255.255.0
13. Specify the network interface name where your JAVA ERS instance virtual IP address is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
14. Do you want to automate the JAVA application servers? Value type: {yes|no}. Example: yes
14.1. Enter the nodes where you want to automate the application servers. Value type: List of host names. Example: Node1, Node2
14.2. Specify all instance names of your application servers. Use the same order as the nodes in the previous question. Value type: List of values. Example: J01, J02
14.3. Enter the start timeout value for your JAVA application servers. Value type: Numeric. Example: 500
14.4. Enter the stop timeout value for your JAVA application servers. Value type: Numeric. Example: 500
15. Did you configure a virtual host name during installation for at least one of the application servers specified in the previous question? Value type: {yes|no}. Example: no
15.1. Specify the virtual host name for each application server. Use the same order as the nodes in the previous questions. If you installed one of the application servers without a virtual host name, specify the system host name instead. Value type: List of host names. Example: Node1, Node2
16. Do you want to automate the SAP Host Agent? Value type: {yes|no}. Example: no
16.1. Enter the nodes where you want to automate the SAP Host Agent. Value type: List of host names. Example: Node1, Node2
17. Do you want PowerHA SystemMirror to automate your SAP router? Value type: {yes|no}. Example: yes
17.1. Enter the desired prefix for the SAP router resources. Value type: String. Example: SAP_ROUTER
17.2. Enter the nodes where you want to automate the SAP router. Value type: List of host names. Example: Node1, Node2
17.3. Specify the virtual IPv4 address that clients will use to connect to the SAP router. Value type: IPv4 address. Example: 172.19.15.17
17.4. Specify the netmask for the SAP router virtual IP address. Value type: IPv4 address. Example: 255.255.255.0
17.5. Enter the network interface for the SAP router IP address. Value type: String (plus additional value checking). Example: Eth1
17.6. Specify the fully qualified routing table file name of the SAP router. Value type: String. Example: /usr/sap/TS2/SYS/global/saprouttab
18. Do you want PowerHA SystemMirror to automate the SAP web dispatcher? Value type: {yes|no}. Example: yes
18.1. Enter the desired prefix for the SAP web dispatcher resources. Value type: String. Example: SAP_WDISP
18.2. Enter the nodes where you want to automate the SAP web dispatcher. Value type: List of host names. Example: Node1, Node2
18.3. Specify the SAP system ID (SAPSID) for the SAP web dispatcher. Value type: String (plus additional value checking). Example: JW1
18.4. Specify the user name of the instance owner that is used to run the start, stop, and monitor commands for SAP web dispatcher resources. Value type: String. Example: Jw1adm
18.5. Specify the instance name of the SAP web dispatcher instance, for example, W00. Value type: String. Example: W00
18.6. Specify the virtual host name for the SAP web dispatcher. Value type: Host name. Example: Node1, Node2
18.7. Specify the virtual IPv4 address that clients will use to connect to the SAP web dispatcher. Value type: IPv4 address. Example: 172.19.15.18
18.8. Specify the netmask for the virtual IP address of the SAP web dispatcher. Value type: IPv4 address. Example: 255.255.255.0
18.9. Specify the network interface on which the virtual IP address of the SAP web dispatcher is activated on each node as an alias. Value type: String (plus additional value checking). Example: Eth1
19. If your database is automated by using PowerHA SystemMirror in the PowerHA SystemMirror cluster, do you want to create the StartAfter relationships for your application servers? Note: The StartAfter relationship is created for each application server. If you want to create the StartAfter relationships for the database, the database must be automated and must belong to the same cluster as the SAP system. Value type: {yes|no}. Example: yes
19.1. Enter the name of the primary resource of the PowerHA SystemMirror database. Value type: String (primary resource name). Example: SAP_HDB_SH0_HDB00_sr_primary_hdb

Closing the Smart Assist wizard

You can close the Smart Assist wizard by choosing one of the following options:
1. Use the 0 option to close the wizard. The user input is saved by using the clmgr smart_assist command. If all parameter values are specified correctly, which is indicated by the overall parameter status OK, you are prompted for the Smart Assist policy activation.
2. Use the X option to cancel the wizard. Confirm your cancel request to quit the wizard without saving the changes. The parameter summary is not created and you cannot activate the policy. The Finish and Cancel options are available in the Overview page and also in each parameter pane. The two actions run independently of the page or pane in which the option is selected.

Activating the Smart Assist wizard

If you used the 0 option to close the wizard and if all parameter values are specified correctly with the overall parameter status OK, you can activate the policy.

Depending on the option that you select, one of the following actions is performed:

Yes, activate as new policy
    The Smart Assist policy is activated as a new policy. If you chose this option, the following command is invoked:
    clmgr add smart_assist APPLICATION=<application> SID=<value> INSTANCE=<instance value>

Yes, activate by updating currently active policy
    The Smart Assist policy is activated by updating the currently active policy. If you chose this option, the following command is invoked:
    clmgr update smart_assist APPLICATION=<application> SID=<value> INSTANCE=<instance value>

No, save modifications and exit
    The Smart Assist policy is not activated. Your modifications are saved and the wizard is closed.

No, return to parameter overview
    The Smart Assist policy is not activated. Your modifications are not saved and the wizard displays the Overview page.


SAP NetWeaver operations

The following operations can be performed for SAP NetWeaver:

Setup Smart Assist
    Configures the Smart Assist policies.

Add Smart Assist
    Activates the Smart Assist policy. All existing resources are deleted.

Update Smart Assist
    Updates the active policy without stopping any resource. All existing resources are either modified or kept unchanged. The new resources are added to the policy.

Modify Smart Assist
    Modifies the active policy and removes all the existing resources. All resources that are not deleted are not stopped.

Delete Smart Assist
    Deactivates the active policy. All existing resources are deleted.

Verifying the SAP NetWeaver High Availability

You can verify the start and stop function of the SAP NetWeaver High Availability feature if the installation runs successfully. You can also verify the failover scenario for planned and unplanned outages.

To verify the start and stop function of the SAP NetWeaver High Availability feature, complete the following steps:
• To start your SAP NetWeaver system, enter the clmgr online resource_group ALL command.
• To display the different resource groups created by the wizard and their corresponding state, enter the clRGinfo command.
• To stop your SAP NetWeaver system, enter the clmgr offline resource_group ALL command.
• You can verify whether the StartAfter relationship is configured between the application server and SAP HANA by checking, with the clRGinfo command, whether the application server resource groups change to an Online state only after the SAP HANA resource groups are in an Online state.

Note:

The StartAfter relationship between the SAP NetWeaver system (application server) and the SAP HANA High Availability feature is supported only when the SAP NetWeaver system and the SAP HANA High Availability feature are configured on the same cluster.
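
A minimal verification session, using only the commands listed above; the behavior described in the comments is what you would expect to observe, not literal program output:

    # Bring all resource groups online and watch the startup order
    clmgr online resource_group ALL

    # Application server resource groups should reach the Online state
    # only after the SAP HANA resource groups are Online
    clRGinfo

    # Stop the SAP NetWeaver system again
    clmgr offline resource_group ALL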

Troubleshooting PowerHA SystemMirror Smart Assist issues

These topics describe how to troubleshoot PowerHA SystemMirror Smart Assist issues.

PowerHA SystemMirror is not able to harvest some values during wizard execution

This topic examines the situations when PowerHA SystemMirror is not able to harvest some values during wizard execution, such as an instance name or a service IP address, by using the specified host name.

Problem

PowerHA SystemMirror is not able to harvest some values during wizard execution.


Solution

This problem has the following possible causes:
v Check that all the nodes of the cluster are online. You can check the node status by using the clmgr query node -v command.
v Run the harvest command that is mentioned in the Smart Assist wizard manually on the Linux system. If the harvest command works when it is run manually, the wizard failure is probably caused by a delay in the system response, because Smart Assist waits only 10 seconds for a response from the harvest command.
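
A quick way to run both checks from the list above; the <harvest-command> placeholder stands for whatever harvest command the wizard output names, which is specific to your configuration:

    # Verify that every cluster node reports as online
    clmgr query node -v

    # Re-run the wizard's harvest command manually and time it;
    # Smart Assist waits only 10 seconds for a response
    time <harvest-command>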

Replication mode that is configured in Smart Assist wizard is different from replication mode shown in SAP HANA setup

This topic examines the situations when the replication mode that is configured in the Smart Assist wizard is different from what is configured manually during SAP HANA replication setup, and this replication mode is not shown in the SAP HANA setup.

Solution

The SAP HANA Replication Mode and Replication Site Name that are configured while running the Smart Assist wizard must be the same as the values that are specified during SAP HANA replication setup.

Smart Assist policy fails to activate

Problem

The PowerHA SystemMirror policy fails to activate with the reason Not able to find some file or path.

Solution

This problem has the following possible causes:
v Check that all nodes of the cluster are online by using the clmgr query node -v command.
v Check whether the other nodes of the cluster are reachable. An error might occur if one of the cluster nodes associated with the Smart Assist policy is not reachable.

Smart Wizard does not detect or show one or more Ethernet interfaces in the list

Solution

Check whether the Ethernet interfaces are running. If the interfaces were just brought up, it might take some time for Smart Assist to detect the interfaces.
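
The solution does not prescribe a specific command; on most Linux distributions a quick way to confirm the interface state is the standard ip utility:

    # List all interfaces; running interfaces report state UP
    ip link show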


Notices

This information was developed for products and services offered in the US.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:


IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without notice. Dealer prices may vary.

This information is for planning purposes only. The information herein is subject to change before the products described become available.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows:

© (your company name) (year).

Portions of this code are derived from IBM Corp. Sample Programs.

© Copyright IBM Corp. _enter the year or years_.


Privacy policy considerations

IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below.

This Software Offering does not use cookies or other technologies to collect personally identifiable information.

If the configurations deployed for this Software Offering provide you as the customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent.

For more information about the use of various technologies, including cookies, for these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy and IBM’s Online Privacy Statement at http://www.ibm.com/privacy/details, in the section entitled “Cookies, Web Beacons and Other Technologies” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at http://www.ibm.com/software/info/product-privacy.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.


Index

Special characters
/var/pha/log/clmgr/clutils.log 47
/var/pha/log/hacmp.out 47

A
application 15

C
client 5
cluster
    client 5
    IP address takeover 10
    network 4, 8
    node 4, 7
    physical components 3
    resource groups 17
    reviewing message log files 47
communication interface 8
configuration
    standby 18
        example 18
    takeover 20
        mutual 21
        one-sided 20
        two-node mutual 22
Configuring dependencies between resource groups 41
Configuring netmon.cf file 9

E
Events 58
example
    standby configuration 18

H
heartbeating 10
    point-to-point network 11
    TCP/IP network 11

I
Installing 57
IP address takeover 10
issues
    PowerHA SystemMirror startup 48, 50, 51, 52, 53, 75

L
log
    reviewing cluster message 47
Log files 58
Logging in 58
logical network 8

M
Managing users and user groups 5
message log
    reviewing 47
monitoring
    persistent node IP labels 16

N
Navigating 58
network 4, 8
    communication interfaces 8
    heartbeating 10
    logical 8
    physical 8
node 4, 7

P
persistent node IP label 16
physical network 8
Planning 55
PowerHA SystemMirror 4
PowerHA SystemMirror startup issues 48, 50, 51, 52, 53, 75

R
resource
    applications 15
    service IP address 16
    service IP label 16
resource group 17
    fallback 17
    fallover 17
    startup 17
reviewing
    message log files 47

S
service IP address 16
service IP label 16
Split configurations 12

T
Troubleshooting 60


IBM®

Printed in USA