Control-M Automation API, Docker, and Microservices
Solution White Paper

How customers deliver Digital Business Automation for the ultimate competitive advantage



Table of Contents

• Executive Overview
• Containers
• Microservices
• DevOps
• Why Batch Job Scheduling?
• Customer Use Cases
• The Control-M Approach to Container Automation
• Conclusion


Executive Overview

Today, every company relies on technology to gain a business advantage or to respond to competitive pressure. A dizzying array of new methods, approaches, and tools can help organizations achieve those goals. Among them are containers, microservices, and DevOps. Each can be used independently, but more organizations are combining the three, often with cloud computing, to create their modern technology platforms.

As we have seen in the past, new technologies and approaches “stand on the shoulders of giants” rather than suddenly appearing fully formed and ready to solve all the world’s problems. The same is true of containers, microservices, and DevOps, which rely on mature solutions that already deal with the massive complexity of a typical enterprise production environment and are sufficiently adaptable to address new requirements.

The growing complexity of dynamic infrastructure, combined with the massive diversity and volume of data, requires new development practices to drive accelerated application delivery. Control-M provides a comprehensive platform that helps organizations develop, deliver, execute, monitor, manage, and automate the applications that underpin today’s digital business.

This paper discusses experiences gathered from the field during customer engagements that have led to implementations of job management in containerized environments. We also explore the relationship of DevOps and containers to microservices, and how these three technology trends interact in the observed customer environments.

Containers

There are several container solutions available today, but Docker has emerged as the most popular; for the remainder of this paper, Docker is the term used to represent containerization. Docker packages an application in a container that behaves like a virtual server and can run on any host supported by Docker Engine (Linux and Windows servers). This provides portability to execute applications on any infrastructure and isolation from infrastructure dependencies, thus simplifying deployment. Containers are lightweight compared to virtual machines, enabling quick startup and a high degree of scalability.

Among the many reasons for Docker’s popularity is the broad collection of tools for container management and resource scheduling, such as Swarm, Kubernetes, Mesos, and OpenShift.

Docker allows developers to focus on building the very best solutions without having to worry about the idiosyncrasies of infrastructure or middleware. A Dockerized application that runs on a developer’s laptop will run with no changes in any data center, no matter how complex or different from the laptop, because all of the components on which the application depends are packaged inside the container.

It’s important to note that a container is essentially a lightweight ‘host’. The life of a container can be very short and the number of containers can become very large. However, applications running in a container still have all the same management requirements as any traditional application running on a physical or virtual machine.


Microservices

Microservices architecture is an approach to building flexible, independently deployable software systems, making it popular for continuously deployed applications. Services in a microservices architecture are relatively small and granular, and communicate with each other over a network using lightweight protocols. Containers can be a great choice for service components in a microservices-architected application, but they aren’t mandatory or the only choice.

DevOps

The Whole Is Greater Than the Sum of Its Parts

Combining microservices and containers with a modern CI/CD automation pipeline and DevOps processes creates dramatic improvements in the speed and quality of delivered applications. A microservices approach makes it easier to change and add functions and qualities to the system at any time. It also allows the architecture of an individual service to evolve through continuous updates, reducing the need for a big upfront design and allowing for early and continuous software release.


Jobs-as-Code

Today’s development teams tend to work in an agile fashion, applying CI/CD principles and using a highly automated toolchain for building, testing, and promoting their applications. With Control-M and the Jobs-as-Code approach to business application automation, developers and engineers can build application automation artifacts, such as job flow definitions, using JSON. They can then debug and test them in a private sandbox on their personal machines. Once coding and initial testing are complete, those artifacts can be committed to the source code management (SCM) system of their choice and participate in the complete automated toolchain that builds, validates, tests, and promotes the business logic and all other application components. Whether teams are building brand-new applications or refactoring monolithic solutions into microservices architectures, Control-M gives them complete freedom to build and test using their preferred tools, full control and ownership over all application components, and fully instrumented, enterprise-grade services ready to run in the most complex production environments.
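As a minimal sketch of what a Jobs-as-Code artifact can look like, the snippet below builds a two-job flow as a Python dictionary and serializes it to JSON. The structure is modeled on the Control-M Automation API JSON format; the folder, job names, commands, and run-as user are illustrative, not taken from any real deployment.

```python
import json

def build_job_flow():
    # A folder containing two command jobs and a Flow object that
    # sequences them: TransformData waits for ExtractData to complete.
    return {
        "ExtractTransformFolder": {
            "Type": "SimpleFolder",
            "ExtractData": {
                "Type": "Job:Command",
                "Command": "extract.sh",    # hypothetical script
                "RunAs": "batchuser",       # hypothetical run-as user
            },
            "TransformData": {
                "Type": "Job:Command",
                "Command": "transform.sh",  # hypothetical script
                "RunAs": "batchuser",
            },
            "ExtractThenTransform": {
                "Type": "Flow",
                "Sequence": ["ExtractData", "TransformData"],
            },
        }
    }

if __name__ == "__main__":
    # This serialized definition is what gets committed to SCM alongside
    # application code and promoted through the automated toolchain.
    print(json.dumps(build_job_flow(), indent=2))
```

Because the artifact is plain JSON, it diffs, merges, and versions exactly like source code.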

Why Batch Job Scheduling?

Containerized applications have the same job workflow management requirements as traditional applications, in addition to some new and unique requirements. For example, an application running within a container may still need to start only upon the occurrence of some event, such as a specific date and time, the completion or start of another application, or arrival or creation of some data. Once the containerized application completes, it’s necessary to determine success or failure to identify subsequent actions such as triggering the execution of other containers, traditional applications, or recovery actions.

Managing such dependency relationships is fundamental to ensuring correct sequencing for application execution. It also becomes the foundation for managing end-to-end business processes and their service levels as opposed to tracking technical container objects. These are very traditional capabilities and requirements of enterprise job scheduling.
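To make the sequencing idea concrete, here is a minimal sketch (not Control-M's implementation) of dependency-driven ordering: a job becomes eligible to run only when every job it waits on has completed. The job names are illustrative.

```python
from collections import deque

def execution_order(dependencies):
    """dependencies maps each job to the set of jobs it waits on."""
    pending = {job: set(reqs) for job, reqs in dependencies.items()}
    dependents = {}
    for job, reqs in dependencies.items():
        for req in reqs:
            dependents.setdefault(req, set()).add(job)
    # Jobs with no prerequisites are eligible immediately.
    ready = deque(sorted(j for j, reqs in pending.items() if not reqs))
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        # Completing a job may make its dependents eligible.
        for nxt in sorted(dependents.get(job, ())):
            pending[nxt].discard(job)
            if not pending[nxt]:
                ready.append(nxt)
    if len(order) != len(pending):
        raise ValueError("cyclic dependency detected")
    return order

if __name__ == "__main__":
    deps = {
        "ingest": set(),
        "reconcile": {"ingest"},
        "risk_calc": {"ingest"},
        "report": {"reconcile", "risk_calc"},
    }
    print(execution_order(deps))
```

The same dependency graph is also what lets a scheduler answer "where are we in the business process?" at any point in time.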

Some have suggested taking a peer-to-peer approach to managing workflows, but such solutions are difficult to scale as management requirements and application complexities increase. ‘Publish/Subscribe’ models may be acceptable for modest workflows, but quickly encounter basic challenges in tracking process lineage, managing SLAs, and answering fundamental questions such as, “Where are we in a business process?” or “How close to done are we?”.

Let’s look at some unique characteristics of containers. One characteristic is their transient nature. The container usually disappears once the application completes. Any logs, output, or audit records that need to be collected must be extracted before the container disappears.
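The sketch below illustrates the transient-container problem: output and an audit record are captured and persisted outside the job's runtime before teardown. A short-lived subprocess stands in for the container here, and the archive file layout is hypothetical; in Control-M this role is played by the agent and Workload Archiving.

```python
import json
import subprocess
import tempfile
from pathlib import Path

def run_and_archive(command, archive_dir):
    # Run the "containerized" job and capture its output and exit status.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    record = {
        "command": command,
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
    # Persist the audit record to durable storage before the
    # transient environment disappears.
    path = Path(archive_dir) / "job_audit.json"
    path.write_text(json.dumps(record, indent=2))
    return record

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        rec = run_and_archive("echo reconciliation complete", d)
        print(rec["exit_code"], rec["stdout"].strip())
```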

Containers also launch quickly and require fewer resources than a conventional virtual machine. Several factors contribute to these properties, including architectural differences that are beyond the scope of our discussion. However, physical size and the processing required to initialize a container to perform useful work are directly affected by choices that can be made in the way job management is implemented. For example, if image size is an important factor, agentless management becomes a strong requirement as it removes the need for an agent embedded in the container. If launch speed is important and agent-based reliability is required, it becomes highly desirable to have agents that can be pre-installed when images are constructed, but ‘wired’ or registered dynamically when containers start.

Finally, due to the dynamic nature of containers and their scale-out characteristics, jobs cannot be bound to individual containers. Instead, every job and execution instance must be able to run on any eligible container, both for initial execution as well as rerun, due to error recovery or a cyclic processing requirement (e.g., run every 2 minutes).
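A tiny simulation (again, not Control-M internals) shows what it means for work not to be bound to individual containers: containers register with a logical host group as they start, and any queued execution or rerun can be dispatched to any currently registered member. Round-robin is just one simple eligibility policy; the names are illustrative.

```python
import itertools

class HostGroup:
    """A logical group of compute resources that makes up a work queue."""

    def __init__(self, name):
        self.name = name
        self.members = []
        self._rr = None  # round-robin iterator over registered members

    def register(self, agent_id):
        # Called when a container starts and its embedded agent comes up.
        self.members.append(agent_id)
        self._rr = itertools.cycle(self.members)

    def dispatch(self, job):
        # Any registered member is eligible; the job is never pinned
        # to a particular container instance.
        if not self.members:
            raise RuntimeError("no eligible containers registered")
        return (job, next(self._rr))

if __name__ == "__main__":
    group = HostGroup("payments_queue")
    group.register("container-a")
    group.register("container-b")
    first = group.dispatch("reconcile")   # initial execution
    rerun = group.dispatch("reconcile")   # a rerun may land elsewhere
    print(first, rerun)
```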

For organizations adopting modern application delivery practices, a microservices approach enables applications to be broken down into separate independent components that can be built and updated quickly by small teams. Each component can be deployed according to its own schedule without impacting other teams. A common, but not mandatory, characteristic is that a microservice component fits well into a container. The combination of these technology approaches in modern applications is increasing and becoming a frequent implementation choice. Thus, it’s common that containerized applications are built using continuous integration and continuous delivery (CI/CD). Furthermore, batch containerized applications will benefit tremendously from a Jobs-as-Code approach.


Customer Use Cases

Let’s take a closer look at real-world use cases to see how Control-M customers are adopting various container technologies, including Docker, Amazon’s Elastic Container Service (ECS), Azure Container Service, Rocket, Container Service on Alibaba Cloud, and Google Container Engine. A variety of management solutions appear, such as Mesos, Kubernetes, and Docker Swarm. Many Control-M customer engagements already display this diversity:

Traditional Financial Services Organization

A traditional financial services organization is deploying a new microservices-based architecture that the customer created itself. The architecture is built around a set of tools, including Docker, that help manage the full application lifecycle (design, development, build, deployment, etc.). This initiative is expected to reduce time-to-market for new apps and products by simplifying maintenance and testing and by providing scalability and fault-tolerant operation.

Initially, microservices will be used for developing new applications. Eventually, existing monolithic applications will also be converted to the new architecture. Among the tools being used to manage the new architecture are those focused on the lifecycle of applications running in containers, including Docker Swarm, Jenkins, and a custom “backend” that the customer developed internally. Adoption of container technology is at an early stage, but there are already containers running in preproduction and production performing tasks such as payment reconciliation, risk analysis calculations, and database updates and queries. These applications run inside containers and take advantage of Control-M to manage dependencies with jobs running in other containers and on ‘conventional’ hosts.

Online Payment Processor

An online payment processor is re-engineering payment systems that are currently monolithic and run in a single data center. As the customer’s technology infrastructure moves to more flexible and dynamic platforms distributed around the world, they want to make their applications more resilient and scalable to support an expanding global customer base.

This business need translates into the use of multiple public clouds and new applications that can take advantage of massive scale in a completely dynamic fashion. The customer is moving toward Mesos and Docker to achieve those goals. Since these are highly critical payment processing applications, the customer requires sophisticated scheduling, powerful SLA management, and operational visibility into the execution of mission-critical workloads.

IT Services Provider for the Travel Industry

An IT services provider for the travel industry runs approximately 300,000 daily batch jobs that are critical to its business and also form the backbone of the global travel industry. For example, the customer delivers passenger manifests to international security agencies. That information must be delivered and confirmed before an aircraft is allowed to enter its target country’s airspace. Without the manifest, aircraft are either not allowed to leave the ground or must be diverted to a nearby country willing to accept them.

When operating a data center that processes about 30,000 transactions per second and about 50 billion SQL queries daily, upgrading infrastructure seems difficult to accomplish while keeping the business in flight. However, this is exactly what the customer is doing while re-hosting those mission-critical applications with no disruption. The new private cloud architecture relies on Docker, Kubernetes, and OpenShift running on OpenStack infrastructure. Each Kubernetes pod runs a container with an embedded Control-M agent baked in during image creation. Upon instantiation, the agent registers dynamically with the workload manager (Control-M Server) and connects itself to a host group (a logical group of compute resources that makes up a work queue). The Control-M agent image includes full support for the same secure file transfer facility that manages data movement in the traditional infrastructure environment, allowing work to move transparently between the old and new environments while preserving the same operational views and facilities familiar to support and operations teams.

The Control-M Approach to Container Automation

With Control-M Automation API, requirements specific to containers are also addressed:

• To manage business application components running in a container, a Control-M agent must be embedded in the container image. The provision service makes this simple when building the Docker image.

• Containers are highly dynamic and have no identity until they are instantiated (e.g., via a Docker ‘run’ command). This means the registration and configuration normally performed to make an agent operational must be done automatically at run time. That registration is accomplished using additional ‘provision’ functions, and the agent is dynamically added to a logical ‘queue’ using ‘configure’ services. Any work waiting for resources to become available in that ‘queue’ is then dispatched to run in the newly instantiated container and managed by the embedded agent.

• Containers can be dynamic with short lifespans, so it is imperative to capture job output for problem analysis and auditing. Control-M Workload Archiving provides persistent, easily accessible storage for output that may be critical to immediate problem resolution, and serves as a reliable repository for long-term auditing. It offers flexible retention rules, text search capabilities, and side-by-side graphical ‘diff’ views. Perhaps most important, the archived data is directly accessible from Control-M user interfaces, so that when problem analysis is performed, output data for the specific jobs under investigation, as well as their previous execution instances, is easily available.

• This approach seamlessly embeds containerized application components into the end-to-end business process. You can now manage the entire flow regardless of whether it consists entirely of containers or is a mix of containers and traditional applications.

• Jobs-as-Code workflows are defined using JSON, so they can be edited with any integrated development environment (IDE), managed in a source code management (SCM) system, and travel together with all other application artifacts.

Conclusion

Control-M is the market leader in managing complex business workloads, not only because of its ability to adapt to constant changes in technology, but also because of the constant stream of unique functionality that delivers comprehensive business value. Previous innovations include being the first enterprise solution to offer seamless manageability across mainframe, distributed, and open systems, and the first to add service-level management for batch workloads, goal-seeking forecast modeling, and predictive analytics through self-learning statistics accumulation.

Control-M Automation API for containerized and hyper-heterogeneous environments unleashes all of the capabilities that have made Control-M the most powerful Digital Business Automation solution over the last several decades.

For More Information

To learn more about Jobs-as-Code, visit bmc.com/jobsascode. For a sample implementation of container management and other Jobs-as-Code best practices and how-to tips, visit controlm.github.io.

BMC is a global leader in innovative software solutions that enable businesses to transform into digital enterprises for the ultimate competitive advantage. Our Digital Enterprise Management solutions are designed to fast track digital business from mainframe to mobile to cloud and beyond.

BMC – Bring IT to Life
BMC digital IT transforms 82 percent of the Fortune 500.

BMC, BMC Software, the BMC logo, and the BMC Software logo, and all other BMC Software product and service names are owned by BMC Software, Inc. and are registered or pending registration in the US Patent and Trademark Office or in the trademark offices of other countries. All other trademarks belong to their respective companies. © Copyright 2017 BMC Software, Inc.