InterConnect 2017
HHM-6892: Availability and Scalability with IBM MQ Clustering
Mark [email protected]
2/27/2017
Please Note
IBM’s statements regarding its plans, directions, and intent are subject to
change or withdrawal without notice at IBM’s sole discretion.
Information regarding potential future products is intended to outline our
general product direction and it should not be relied on in making a
purchasing decision.
The information mentioned regarding potential future products is not a
commitment, promise, or legal obligation to deliver any material, code or
functionality. Information about potential future products may not be
incorporated into any contract. The development, release, and timing of
any future features or functionality described for our products remains at
our sole discretion.
Performance is based on measurements and projections using standard
IBM benchmarks in a controlled environment. The actual throughput or
performance that any user will experience will vary depending upon
many factors, including considerations such as the amount of
multiprogramming in the user’s job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no
assurance can be given that an individual user will achieve results similar
to those stated here.
Over time…

[Diagram: a few clients and services connected through a small number of queue managers]
Over time…

[Diagram: more clients, applications and services are added, and the mesh of queue managers and the channels between them grows]
…Now you need a cluster

[Diagram: many clients and services spread across many queue managers — manually maintaining channels between every pair of queue managers no longer scales]
Step 1: Create your two full repositories

On FR1:
ALTER QMGR REPOS('CLUS1')
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR1 location)
DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR2 location)

On FR2:
ALTER QMGR REPOS('CLUS1')
DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(FR2 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)
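In these commands, "FR1 location" and "FR2 location" stand in for real connection names. A minimal sketch for FR1, assuming hypothetical hostnames and the default listener port:

```mqsc
* On FR1: hold a full repository and point a manual sender at FR2
ALTER QMGR REPOS('CLUS1')
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('fr1.example.com(1414)') CLUSTER('CLUS1')
DEFINE CHANNEL('CLUS1.FR2') CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr2.example.com(1414)') CLUSTER('CLUS1')
```

FR2 is the mirror image, with the CLUSRCVR and CLUSSDR names and connection names swapped.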
Step 2: Add in more queue managers

On QMGR1:
DEFINE CHANNEL('CLUS1.QMGR1') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR1 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)
DEFINE QLOCAL(Q1) CLUSTER(CLUS1)

Check the cluster state with:
DISPLAY CLUSQMGR(*)
DISPLAY QCLUSTER(*)

[Diagram: QMGR1 joins the cluster; both full repositories, FR1 and FR2, learn that Q1 is hosted at QMGR1]
On QMGR2:
DEFINE CHANNEL('CLUS1.QMGR2') CHLTYPE(CLUSRCVR) CLUSTER('CLUS1') CONNAME(QMGR2 location)
DEFINE CHANNEL('CLUS1.FR1') CHLTYPE(CLUSSDR) CLUSTER('CLUS1') CONNAME(FR1 location)

[Diagram: QMGR2 joins the cluster the same way; the full repositories now know about both queue managers and about Q1@QMGR1]
Step 3: Start sending messages

[Diagram: Client 1 connects to QMGR2 and issues MQOPEN(Q1). QMGR2 asks a full repository where Q1 is, learns Q1@QMGR1, and its cluster sender channel to QMGR1 is defined and started automatically]
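On QMGR2 you can watch this happen: the cluster sender channel to QMGR1 is auto-defined, reusing the name of QMGR1's cluster receiver. For example:

```mqsc
* The auto-defined channel appears alongside the manually defined ones
DISPLAY CLUSQMGR(*) CHANNEL
* Check the running state of the channel carrying the messages
DISPLAY CHSTATUS('CLUS1.QMGR1')
```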
So all you needed…
• Two full repository queue managers
• A cluster receiver channel each
• A single cluster sender each
• No need to manage pairs of channels between each queue manager combination, or their transmission queues
• No need for remote queue definitions

But what else do you get with a cluster?
• Workload Balancing
• Service Availability
Where can the messages get stuck?
• Target queues
• Transmission queues

[Diagram: a message can wait either on the target queue at the service queue manager, or on the transmission queue at the sending queue manager]
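Both holding points can be inspected with queue depths; a quick check on any queue manager in the path:

```mqsc
* Messages waiting for a consumer on the target queue
DISPLAY QLOCAL(Q1) CURDEPTH
* Messages waiting to be sent on to another queue manager
DISPLAY QLOCAL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH
```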
The service queue manager/host fails

When a queue manager fails:
• Ensure messages are not bound to it
• Restart it to release queued messages

Message reallocation: unbound messages on the transmission queue can be diverted.
Locked messages: messages on the failed queue manager are locked until it is restarted.
Restart the queue manager: use multi-instance queue managers or HA clusters to restart a queue manager automatically.
Reconnect the service: make sure the service is restarted and reconnects to the restarted queue manager.
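Whether a message can be diverted depends on its binding. One way to keep messages unbound is at the queue definition (applications can also request this per open with MQOO_BIND_NOT_FIXED):

```mqsc
* NOTFIXED lets workload balancing re-drive each message, so messages
* still on a transmission queue for a failed instance can be reallocated
ALTER QLOCAL(Q1) DEFBIND(NOTFIXED)
```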
The service application fails

• Cluster workload balancing does not take into account the availability of receiving applications, or a build-up of messages.

Blissful ignorance: this queue manager is unaware of the failure of one of the service instances.
Unserviced messages: half the messages will quickly start to build up on the service queue.
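A stranded queue instance is usually visible from its status: a rising depth with no open-for-input handles. For example:

```mqsc
* IPPROCS(0) with a growing CURDEPTH suggests no attached consumer
DISPLAY QSTATUS(Q1) CURDEPTH IPPROCS
```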
Monitoring for service failures

• MQ provides a sample monitoring service
• Regularly checks for attached consuming applications
• Generally suited to steady-state service applications

Detecting a change: when a change to the open handles is detected, the cluster workload balancing state is modified.
Sending queue managers: newly sent messages will be sent to active instances of the queue.
Moving messages: any messages that slipped through will be transferred to an active instance of the queue.
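The sample referred to here is amqsclm, the cluster queue monitoring sample. The invocation below is a sketch — the queue mask is hypothetical and the flags are from memory, so check the sample's usage text:

```
amqsclm -m QMGR1 -q SERVICE.*
```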
Client availability

• Multiple locations for a client to connect to
• Allows new requests when one queue manager is unavailable
• Replies can be automatically routed back to the originating queue manager
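One common way to give a client multiple places to connect is a connection list in the client channel's CONNAME; a sketch with hypothetical channel and gateway names:

```mqsc
* The client tries gw1 first, then gw2 if it is unavailable
DEFINE CHANNEL('CLUS1.CLIENT') CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('gw1.example.com(1414),gw2.example.com(1414)')
```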
Global applications

• Prefer traffic to stay geographically local
• Except when you have to look further afield
• How do you do this with clusters?

[Diagram: clients and services in both New York (USA) and London (Europe), each with local queue managers]
One cluster

• Clients always open AppQ
• Local alias determines the preferred region
• Cluster workload priority is used to target geographically local cluster aliases
• Use of CLWLPRTY enables automatic failover
• CLWLRANK can be used for manual failover

On the New York side:
DEF QALIAS(AppQ) TARGET(NYQ)
DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)

On the London side:
DEF QALIAS(AppQ) TARGET(LonQ)
DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)

[Diagram: AppQ resolves to the local region's clustered alias; the higher-priority, geographically local instance of that alias wins while it is available]
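For the manual route, CLWLRANK is considered ahead of priority by the workload algorithm. One way to force New York's traffic over to London, as a sketch (the rank value is illustrative; the default is 0):

```mqsc
* On the London service queue manager: make its NYQ instance win
* regardless of the CLWLPRTY settings
ALTER QALIAS(NYQ) CLWLRANK(1)
```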
The two cluster alternative

• The service queue managers join both geographical clusters
• Each has separate cluster receivers for each cluster, at different cluster priorities; queues are clustered in both clusters
• The client queue managers are in their local cluster only

[Diagram: USA and EUROPE clusters overlap only on the service queue managers; client queue managers in New York and London sit in their local cluster]
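A London service queue manager joining both clusters might look like this (channel, namelist and host names are hypothetical); the queue is advertised in both clusters through a namelist:

```mqsc
* One cluster receiver per cluster, local cluster at higher priority
DEFINE CHANNEL('EUROPE.LON1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('lon1.example.com(1414)') CLUSTER('EUROPE') CLWLPRTY(9)
DEFINE CHANNEL('USA.LON1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('lon1.example.com(1414)') CLUSTER('USA') CLWLPRTY(4)
* Advertise the service queue in both clusters
DEFINE NAMELIST(BOTH.CLUSTERS) NAMES('USA','EUROPE')
DEFINE QLOCAL(ReqQ) CLUSNL(BOTH.CLUSTERS)
```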
Multiple types of traffic

• Often an MQ backbone will be used for multiple types of traffic:
  • Real-time queries
  • Big data transfer
  • Audit events
Multiple types of traffic

• When using a single cluster and the same queue managers, messages all share the same channels
• Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types

[Diagram: all traffic types flow over the same cluster channels between the same queue managers]
Multiple types of traffic

• Multiple overlaid clusters with different channels enable separation

[Diagram: three overlaid clusters, each with its own channels, carrying the different traffic types separately]
Workload balancing level interference

• Cluster workload balancing is at the channel level
• Messages sharing the same channels, but destined for different target queues, will be counted together
• Multiple applications sharing the same queue managers and the same cluster channels
• The two channels here have an even 50/50 split of messages…
• …but the two instances of Service 1 do not!
• Split the Service 1 and Service 2 queues out into separate clusters or queue managers, or customise the workload balancing logic

[Diagram: Client 1 sends 100 messages to Queue 1 (Service 1) and Client 2 sends 50 to Queue 2 (Service 2); each channel carries 75 messages, so the two Queue 1 instances receive 75 and 25]
A much requested feature… multiple cluster transmission queues

Separation of message traffic: with a single transmission queue there is potential for messages pending for cluster channel A to interfere with messages pending for cluster channel B.
Management of messages: queue concepts such as MAXDEPTH are not useful when a single transmission queue serves more than one channel.
Monitoring: tracking the number of messages processed by a cluster channel is currently difficult/impossible using the queue alone.
Performance? In reality a shared transmission queue is not always the bottleneck; often other approaches to improving channel throughput (e.g. multiple cluster receiver channels) are what is really needed.

Introduced in V7.5 on Distributed platforms, and in V8 on z/OS and IBM i.
Multiple cluster transmit queues: Automatic

• Configured on the sending queue manager, not the owners of the cluster receiver channel definitions
• A queue manager switch automatically creates a dynamic transmission queue per cluster sender channel: ALTER QMGR DEFCLXQ(CHANNEL)
• Dynamic queues are based upon a model queue: SYSTEM.CLUSTER.TRANSMIT.MODEL
• Well-known queue names: SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>

[Diagram: a sending queue manager with SYSTEM.CLUSTER.TRANSMIT.ChlA, .ChlB and .ChlC, each feeding its own cluster sender channel]
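The switch, and a quick way to verify its effect, as a sketch:

```mqsc
* One dynamic transmission queue per cluster sender channel
ALTER QMGR DEFCLXQ(CHANNEL)
* See which transmission queue each running channel is using
DISPLAY CHSTATUS(*) XMITQ
```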
Multiple cluster transmit queues: Manual

• Still configured on the sending queue manager, not the owners of the cluster receiver channel definitions
• Administratively define a transmission queue and configure which cluster sender channels will use it: DEFINE QLOCAL(GREEN.XMITQ) CLCHNAME('GREEN.*') USAGE(XMITQ)
• Set a channel name pattern in CLCHNAME
  • Single/multiple channels (wildcard)
  • E.g. all channels for a specific cluster (assuming a suitable channel naming convention!)
• Any cluster sender channel not covered by a manual transmission queue defaults to the DEFCLXQ behaviour

[Diagram: channel Green.A uses GREEN.XMITQ; channels Pink.A and Pink.B use PINK.XMITQ]
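Completing the picture for the second traffic type in the diagram, with the same pattern (queue and channel names follow the diagram's convention):

```mqsc
* All PINK.* cluster sender channels share this transmission queue
DEFINE QLOCAL(PINK.XMITQ) USAGE(XMITQ) CLCHNAME('PINK.*')
```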
Notices and disclaimers
Copyright © 2017 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form without written permission from IBM.
U.S. Government Users Restricted Rights — use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.
Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. This document is distributed “as is” without any warranty, either express or implied. In no event shall IBM be liable for any damage arising from the use of this information, including but not limited to, loss of data, business interruption, loss of profit or loss of opportunity. IBM products and services are warranted according to the terms and conditions of the agreements under which they are provided.
IBM products are manufactured from new parts or new and used parts. In some cases, a product may not be new and may have been previously installed. Regardless, our warranty terms apply.
Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.
Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary.
References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.
Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute legal or other guidance or advice to any individual participant or their specific situation.
It is the customer’s responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer is in compliance with any law.
Notices and disclaimers continued
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM’s products. IBM expressly disclaims all warranties, expressed or implied, including but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.
IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®, FileNet®, Global Business Services®,Global Technology Services®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®,
PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®, Tivoli® Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.