
WHITEPAPER 2018

www.devopsconference.de

40+ pages of knowledge for DevOps enthusiasts. All about Docker, Kubernetes, Continuous Delivery, DevOps Culture, Cloud Platforms & Security!

DevOps Conference @devops_con, #DevOpsCon

May 28 – 31, 2018 | Berlin | Expo: May 29 & 30, 2018


Index

Docker
Continuous Deployment with Docker Swarm: Services and Stacks in the Cluster (by Tobias Gesellchen)
Top Docker tips from 12 Docker Captains

Kubernetes
Kubernetes Basics: How to build up-to-date (container) applications (by Timo Derstappen)
Taking the pulse of DevOps: "Kubernetes has won the orchestration war" (interview with Nicki Watt, CTO at OpenCredo, by Gabriela Motroc)

Continuous Delivery
OpenShift, Kubernetes & Jenkins: "We wanted to show a start-up can develop entirely within the cloud" (interview with Clemens Utschig-Utschig)

DevOps Culture
Collaboration or survival of the fittest: Who runs the DevOps world? (interview series with DevOps influencers, part 1)
Top 20 social influencers in DevOps 2018: It's time for a shout out to DevOps influencers

Security
Automating DevOps – The technology missing link: Laying the DevSecOps foundation (by Dr. Rao Papolu)

Microservices
Microservices: From a monorepo to a microplatform – No more monolith (by Stuart Harris)
Microservices are more than just a hype (interview with Kai Tödter)

Cloud Platforms
How to capture the multi-cloud opportunity: Will you be part of the multi-cloud (r)evolution? (by Dan Lahl)

Serverless
The road to serverless maturity: Running away from "NoOps" or toward it? (interview series with JAX DevOps speakers)

DOSSIER Docker

Services and Stacks in the Cluster

Continuous Deployment with Docker Swarm

In the DevOps environment, Docker can no longer be reduced to a mere container runtime. An application that is divided into several microservices has orchestration requirements that simple scripts cannot meet. For this, Docker has introduced the service abstraction in Docker Swarm to help orchestrate containers across multiple hosts.

By Tobias Gesellchen

Docker Swarm: The way to continuous deployment

Docker Swarm is available in two editions. As a standalone solution, the older variant requires a slightly more complex set-up with its own key-value store. The newer variant, also called "swarm mode", has been part of the Docker Engine since Docker 1.12 and no longer needs a special set-up. This article only deals with swarm mode, since it is the officially recommended variant and is being developed more actively. Before we delve deeper into the Swarm, let's first look at what Docker Services are and how they relate to the well-known Docker Images and containers.

Docker Swarm: From containers to tasks

Traditionally, developers use Docker Images as a means of wrapping and sharing artifacts or applications. The initially common method of using complete Ubuntu images as Docker Images has since been overtaken by minimal binaries in customized operating systems like Alpine Linux. The interpretation of a container has changed from virtual machine replacement to process capsule. The trend towards minimal Docker Images enables greater flexibility and better resource conservation. This puts less strain on both storage and network; in addition, smaller images ship fewer features, which leads to a smaller attack surface. Containers also start up faster, which makes the whole set-up more dynamic. With this dynamic, a microservice stack is really fun to use and even paves the way for projects like Functions as a Service.

However, Docker Services don't make containers obsolete; they complement them with configuration options, such as the desired number of replicas, deployment constraints (e.g., do not set up the proxy on the database node) or update policies. Containers with their service-specific properties are called "tasks" in the context of services. Tasks are therefore the smallest unit that runs within a service. Since containers are not aware of the Docker Swarm and its service abstraction, the task acts as a link between swarm and container.

You can set up a service, for example based on the image nginx:alpine, with three replicas so that you receive a fail-safe set-up. The desired three replicas manifest themselves as three tasks and thus as containers, which Docker Swarm distributes for you across the available set of Swarm nodes. Of course, you can't achieve fail-safety just by tripling the containers. Rather, Docker Swarm now knows your desired target configuration and intervenes accordingly if a task or node should fail.


Dive right in

In order to make the theory more tangible, we will go through the individual steps of a service deployment. The only prerequisite is a current Docker release; I am using the current version 17.07 on Docker for Mac. Incidentally, all of the examples can be followed on a single computer, although in a production environment they only make sense across different nodes. All aspects of a production environment can be found in the official documentation; this article can only provide selected hints.

By default, the Docker Engine starts with swarm mode disabled. To enable it, enter on the console: docker swarm init.

Docker acknowledges this command by confirming that the current node has been configured as a manager. If you had already switched the Docker Engine to swarm mode before, an appropriate message is displayed instead.

Docker Swarm differentiates between managers and workers. Workers are available purely for deploying tasks, while managers also maintain the Swarm. This includes continuously monitoring the services, comparing them with the desired target state and reacting to deviations if necessary. In a production environment, three or even five nodes are set up as managers to ensure that the Swarm retains its ability to make decisions in the event of a manager's failure. The managers maintain the global cluster state via a Raft log, so that if the leading manager fails, one of the other managers assumes the leader role. If more than half of the managers fail, an incorrect cluster state can no longer be corrected. However, tasks that are already running on intact nodes remain in place.

In addition to the success message, the command entered above also displays a template for adding worker nodes. Workers need to reach the manager at the IP address given at the very end of the command. This can be difficult for external workers under Docker for Mac or Docker for Windows, because on these systems the engine runs in a virtual machine that uses internal IP addresses.
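A join command following that template looks roughly like the sketch below; the token and the address are placeholders for the values printed by your own docker swarm init:

docker swarm join \
  --token SWMTKN-1-<worker-token> \
  <manager-ip>:2377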

The examples become a bit more realistic if we start more worker nodes locally next to the manager. This can be done very easily with Docker by starting one container per worker, in which a Docker Engine is running. This method even allows you to try different versions of the Docker Engine without having to set up a virtual machine or a dedicated server.
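As a sketch of the idea (the swarm-examples repository used below automates this set-up, so the image tag and names here are only illustrative), such a worker can be started with the official docker:dind image and joined from the inside:

# start a throwaway engine-in-a-container and join it as a worker
docker run -d --privileged --name worker1 docker:dind
docker exec worker1 docker swarm join \
  --token <worker-token> \
  <manager-ip>:2377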

In our context, when services are started on individual workers, it is also relevant that each worker must pull the required images from the Docker Hub or another registry. With the help of a local registry mirror, these downloads can be slightly optimized. That's not everything: we also set up a local registry for locally-built images, so that we aren't forced to push these private images to an external registry such as the Docker Hub for deployment. How to script the complete set-up has already been described elsewhere.

To simplify the set-up even further, Docker Compose is available. You can find a suitable docker-compose.yml on GitHub, which starts three workers, a registry and a registry mirror. The following commands set up the necessary environment to help you follow the examples described in this article.

git clone https://github.com/gesellix/swarm-examples.git
cd swarm-examples
swarm/01-init-swarm.sh
swarm/02-init-worker.sh

All other examples can also be found in the named repository. Unless described otherwise, the commands are executed in its root directory.

The first service

After the local environment is prepared, you can deploy a service. nginx, replicated three times, can be set up as follows:

docker service create \
  --detach=false \
  --name proxy \
  --constraint node.role==worker \
  --replicas 3 \
  --publish 8080:80 \
  nginx:alpine

Most options such as --name or --publish should come as no surprise; they merely define an individual name and configure the port mapping. In contrast to the usual docker run, --replicas 3 directly defines how many instances of nginx are to be started, and --constraint=… requires that service tasks may only be started on worker nodes, not on managers. Additionally, --detach=false allows you to monitor the service deployment. Without this parameter, or with --detach=true, you can continue working directly on the console and the service is deployed in the background.

The command instructs the Docker Engine to download the desired image on the individual workers, create tasks with the individual configuration, and start the containers. Depending on the network bandwidth, the initial download of the images takes the longest. The start time of the containers depends on the concrete images and the process running in the container.

Commands such as docker service ls and docker service ps proxy show you the current status of the service or its tasks after deployment. But even with conventional commands like docker exec swarm_worker2_1 docker ps, you will find the instances of nginx as normal containers. You can fetch the nginx default page via browser or curl at http://localhost:8080.

If you want to run a service on each active node instead of a specific number of replicas, the service can be started with --mode global. If you subsequently add new worker nodes to the Swarm, Docker will automatically extend the global service to the new nodes. Thanks to this kind of configuration, you no longer have to manually increase the number of replicas by the number of new nodes.
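A global service could be created like in the following sketch; the service name is hypothetical and the image arbitrary, although monitoring or logging agents are the typical use case:

docker service create \
  --detach=false \
  --mode global \
  --name node-agent \
  nginx:alpine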

Before we look at the question of how three containers can be reached under the same port, let's look at how Docker Swarm restores a failed task. A simple docker kill swarm_worker2_1, which removes one of the three containers, is all that is needed for the Swarm to create a new task. In fact, this happens so fast that you should already see the new container in the next docker service ps proxy. The command shows you the task history, i.e. including the failed task. Such automatic self-healing of failed tasks can be regarded as one of the core features of container managers. With swarm/02-init-worker.sh you can restart the worker you just stopped.

Docker Swarm allows you to configure how to react to failed tasks. For example, as part of a service update, the operation may be stopped, or you may want to roll back to the previous version. Depending on the context, it can make sense to ignore sporadic problems so that the service update is attempted with the remaining replicas.

Load Balancing via Ingress Network

Now we return to the question of how the same port is bundled on three different containers in one service. In fact, the service port is not tied to the physical network interface by conventional means per container; instead, the Docker Engine sets up several indirections that route incoming traffic over virtual networks and bridges. Specifically, the request at http://localhost:8080 went through the ingress network, a cross-node overlay network which can route packets to any service IP. You can view this network with docker network ls and examine it in detail with docker network inspect ingress.

Load balancing is implemented at a level that also enables the uninterrupted operation of frontend proxies. Typically, web applications are hidden behind such proxies in order to avoid exposing the services directly to the Internet. In addition to raising the hurdle for potential attackers, this also offers other advantages, such as the ability to implement uninterrupted continuous deployment. Proxies form the necessary intermediate layer to provide the desired and available version of your application.

The proxy should always be provided with security corrections and bugfixes. There are various mechanisms to ensure that interruptions at this level are kept to a minimum. When using Docker Services, however, you no longer need special machinery. If you shoot down one instance of the three nginx tasks as shown above, the other two will still be accessible. This works not only locally, but also in a multi-node Swarm. The only requirement is a corresponding swarm of Docker Engines and an intact ingress network.

Deployment via service update

Similar to the random or manual termination of a task, you can also imagine a service update. As part of the service update, you can customize various properties of the service. These include the image or its tag; you can change the container environment, or you can customize the externally accessible ports. In addition, secrets or configs available in the Swarm can be made available to a service or withdrawn again. Describing all the options here would go beyond the scope of this article; the official documentation covers them in detail. The following example shows you how to add an environment variable FOO and how to influence the process flow of a concrete deployment:

docker service update \
  --detach=false \
  --env-add FOO=bar \
  --update-parallelism=1 \
  --update-order=start-first \
  --update-delay=10s \
  --update-failure-action=rollback \
  proxy

At first glance, the command looks very complex. Ultimately, however, it only serves as an example of some options that you can tailor to your update needs. In this example, the variable in the containers is supplemented by --env-add. This is done step-by-step across the replicas (--update-parallelism=1), whereby a fourth instance is started temporarily before an old version is stopped (--update-order=start-first). Between each task update there is a delay of ten seconds (--update-delay=10s), and in case of an error the service is rolled back to the previous version (--update-failure-action=rollback).

In a cluster of Swarm managers and workers, you should avoid running resource-hungry tasks on the manager nodes. You probably also don't want to run the proxy on the same node as the database. To map such rules, Docker Swarm allows configuring service constraints. The developer expresses these constraints using labels. Labels can be added or removed via docker service create and docker service update.

Also visit this Session:

Running Databases in Docker at Scale
Joakim Recht (Uber)

Running services in Docker has become more or less mainstream. Running databases in Docker, however, is not that widespread. At Uber, we're migrating all our storage solutions to run in Docker containers, and in this talk we will look at why that's a good idea and what kind of tooling we're building to support running many thousands of MySQL databases, Cassandra nodes, ElasticSearch nodes, and more.


Labels on services and nodes can be changed without even interrupting the task. You have already seen an example above with node.role==worker; for more examples see the official documentation.
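As an illustration with a hypothetical storage label, a node could be labeled first and a service then constrained to matching nodes; node name, service name and image are placeholders:

docker node update --label-add storage=ssd worker1

docker service create \
  --detach=false \
  --name db \
  --constraint node.labels.storage==ssd \
  postgres:alpine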

Imagine that you not only have to maintain one or two services, but maybe ten or twenty different microservices. Each of these services would now have to be deployed using the above commands. The service abstraction takes care of distributing the concrete replicas to different nodes.

Individual outages are corrected automatically, and you can still get an overview of the health of your containers with the usual commands. Still, the command lines are getting unpleasantly long. We have also not yet discussed how different services can communicate with each other at runtime and how you can keep track of all your services.

Inter-service communication

There are different ways to link services. We have already mentioned Docker's so-called overlay networks, which allow node-spanning (or rather node-agnostic) access to services instead of concrete containers or tasks. If you want the proxy configured above to work as a reverse proxy for another service, you can achieve this with the commands from Listing 1.

Listing 1

docker network create \
  --driver overlay \
  app

docker service create \
  --detach=false \
  --name whoami \
  --constraint node.role==worker \
  --replicas 3 \
  --network app \
  emilevauge/whoami

docker service update \
  --detach=false \
  --network-add app \
  proxy

After the creation of an overlay network app, a new service whoami is created in this network. Then the proxy from the example above is also added to the network. The two services can now reach each other using the service name. Ports do not have to be published explicitly for whoami; Docker makes the ports declared in the image via EXPOSE accessible within the network. In this case, the whoami service listens on port 80 within the shared network.

All that is missing now is to configure the proxy to forward incoming requests to the whoami service. nginx can be configured as a reverse proxy for the whoami service as shown in Listing 2.

Listing 2

upstream backend {
  server whoami;
}

server {
  listen 80;

  location / {
    proxy_pass http://backend;
    proxy_connect_timeout 5s;
    proxy_read_timeout 5s;
  }
}

The matching Dockerfile is kept very simple, because it only has to add the individual configuration to the standard image:

FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY backend.conf /etc/nginx/conf.d/

The code can be found in the GitHub repository mentioned above. The following commands build the individual nginx image and push it into the local registry. Afterwards, the already running nginx is provided with the newly created image via service update:

docker build -t 127.0.0.1:5000/nginx -f nginx-basic/Dockerfile nginx-basic
docker push 127.0.0.1:5000/nginx

docker service update \
  --detach=false \
  --image registry:5000/nginx \
  proxy

The service update shows that the image name now uses registry instead of 127.0.0.1 as the repository host. This is necessary because the image is pulled from the workers' point of view, and they only know the local registry under the name registry. The manager, however, cannot resolve the registry hostname and therefore cannot verify the image, so it warns about potentially differing images between the workers during the service update.

After a successful update you can check via curl http://localhost:8080 whether the proxy is reachable. Instead of the nginx default page, the response from the whoami service should now appear. This response looks a bit different on successive requests, because Docker's round-robin load balancing always redirects you to the next task. The easiest way to recognize this is the changing hostname or IP. With docker service update --replicas 1 whoami or docker service update --replicas 5 whoami you can easily scale the service down or up, while the proxy will always use one of the available instances.


Figure 1 shows an overview of the current Swarm with three worker nodes and a manager. The dashed arrows follow a request to http://localhost:8080 through the two overlay networks ingress and app. The request first lands on the nginx task proxy.2, which then acts as reverse proxy and passes the request to its upstream backend. Like the proxy, the backend is available in several replicas, so for this specific request the task whoami.3 on worker 3 is accessed.

You have now learned how existing services can be upgraded without interruption, how to react to changing load using a one-liner, and how overlay networks can eliminate the need to publish internal ports on an external interface. Other operational details are just as easy to handle, e.g. when the Docker Engines, workers or managers need to be updated, or individual nodes need to be replaced. For these use cases, see the relevant notes in the documentation.

For example, a node can be instructed to remove all tasks via docker node update --availability=drain. Docker will then take care of draining the node virtually empty, so that you can carry out maintenance work undisturbed and without risk. With docker swarm leave and docker swarm join you can always remove or add workers and managers. You can obtain the necessary join tokens from one of the managers by calling docker swarm join-token worker or docker swarm join-token manager.
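Putting this together, a node maintenance could look like the following sketch, with worker1 as a placeholder for the node name:

docker node update --availability=drain worker1
# perform maintenance, e.g. update the engine, then:
docker node update --availability=active worker1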

Docker Stack

As already mentioned, it is difficult to keep track of a growing service landscape. In general, Consul or similar tools are suitable for maintaining a kind of registry that provides you with more than just an overview. Tools such as Portainer come with support for Docker Swarm and dashboards that give you a graphical overview of your nodes and services.

Docker offers you a slim alternative in the form of Docker Stack. As the name suggests, this abstraction goes beyond the individual services and deals with the entirety of your services, which are closely interlinked or interdependent. The technological basis is nothing new, because it reuses many elements of Docker Compose. Generally speaking: Docker Stack uses Compose's YAML format and complements it with the Swarm-specific properties for service deployments. As an example, you can find the stack for the manually created services under nginx-basic/docker-stack.yml. If you want to try it instead of setting up services manually, you must first stop the proxy to release port 8080. The following commands ensure a clean state and start the complete stack:

docker service rm proxy whoami
docker network rm app

docker stack deploy --compose-file nginx-basic/docker-stack.yml example

The docker stack deploy command receives the desired stack description via --compose-file. The name example serves on the one hand as an easily recognizable reference to the stack, and internally as a means of namespacing the various services. Docker now uses the information in the docker-stack.yml to generate virtually the equivalent of the docker service create … commands internally and sends them to the Docker Engine.
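After the deployment, the stack and its services can be inspected with the corresponding stack subcommands:

docker stack ls
docker stack services example
docker stack ps example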

Compared to Compose, there are only a few new blocks in the configuration file: the ones under deploy, which, as already mentioned, define the Swarm-specific properties. Constraints, replicas and update behavior are configured analogously to the command line parameters. The documentation contains details and other options that may be relevant to your application.

The practical benefit of stacks is that you can now check the configuration into your VCS and therefore have complete and up-to-date documentation of the set-up of all connected services. Changes are then reduced to editing this file and repeating docker stack deploy --compose-file nginx-basic/docker-stack.yml example. On every execution of the command, Docker checks whether there are any discrepancies between the YAML content and the services actually deployed and corrects them accordingly via internal docker service update. This gives you a good overview of your stack. It is versioned right along the source code of your services and you need to maintain far fewer error-prone scripts.

Fig. 1: A request on its way through overlay networks


Since the stack abstraction is a purely client-side implementation, you still have full freedom to perform your own actions via manual or scripted docker service commands.

If the constant editing of docker-stack.yml seems excessive in the context of frequent service updates, consider variable resolution per environment. The placeholder NGINX_IMAGE is already provided in the example stack. Here is the relevant excerpt:

...
services:
  proxy:
    image: "${NGINX_IMAGE:-registry:5000/nginx:latest}"
...

With an appropriately prepared environment, you can deploy another nginx image without first editing the YAML file. The following example changes the image for the proxy back to the default image and updates the stack:

export NGINX_IMAGE=nginx:alpine
docker stack deploy --compose-file nginx-basic/docker-stack.yml example

The deployment now runs until the individual instances are updated. Afterwards, curl http://localhost:8080 should return the nginx default page again. The YAML configuration of the stack thus remains stable and is adapted only by means of environment variables.

The placeholders can be resolved at any position. In practice, it would therefore be better to keep only the image tag variable instead of the complete image name.

...
services:
  proxy:
    image: "nginx:${NGINX_VERSION:-alpine}"
...

Removing a complete stack is very easy with docker stack rm example.

Please note: all services will be removed without further confirmation. On a production system, the command can be considered dangerous, but it makes handling services in local set-ups and on test stages very convenient.

As mentioned above, the stack uses namespacing based on labels to keep the different services together, but it works with the same mechanisms as the docker service … commands. Therefore, you are free to supplement a stack initially deployed via docker stack deploy with docker service update during operation.

Secrets and service configs

Docker Services and Stacks offer you more than just the management of tasks across different nodes. Secrets and configs can also be distributed more easily using Docker Swarm; compared to the environment variables recommended at https://12factor.net/, they are stored more securely, in only those container file systems that you have authorized.

Basically, Docker Secrets and Configs share the same concept. You first create objects or files centrally in the Swarm via docker secret create … or docker config create …, which are stored internally by Docker; secrets are encrypted beforehand. You give these objects a name, which you then use when you link them to services.

Based on the previous example with nginx and extracts from the official Docker documentation, we can add HTTPS support. Docker Swarm mounts the necessary SSL certificates and keys as files in the containers; for security reasons, secrets only end up in a RAM disk. First, you need suitable certificates, which are prepared in the repository under nginx-secrets/cert. If you want to regenerate the certificates, a suitable script nginx-secrets/gen-certs.sh is available.

Docker Swarm allows up to 500 KB of content per secret, which is then stored as a file in /run/secrets/. Secrets are created as follows:

docker secret create site.key nginx-secrets/cert/site.key
docker secret create site.crt nginx-secrets/cert/site.crt

Configs can be maintained similarly to secrets. Looking back at the example of the individual nginx configuration from the beginning of the article, you will soon see that the specially built image is no longer necessary. To configure nginx, we use the configuration under nginx-secrets/https-only.conf and create it using Docker Config:

docker config create https.conf nginx-secrets/https-only.conf

First, you define the desired name of the config. Then you enter the path or file name for the contents you want Docker to store in the Swarm. With docker secret ls and docker config ls you can find the newly created objects. Now all that's missing is the link between the service and the Swarm secrets and config. For example, you can start a new service as follows. Note that the official nginx image is now sufficient:

docker service create \
  --detach=false \
  --name nginx \
  --secret site.key \
  --secret site.crt \
  --config source=https.conf,target=/etc/nginx/conf.d/https.conf \
  --publish 8443:443 \
  nginx:alpine

In the browser you can see the result at https://localhost:8443, but you have to skip some warnings because of the self-issued certification authority behind the server certificate. In this case, the check is easier via the command line:


curl --cacert nginx-secrets/cert/root-ca.crt https://localhost:8443

Secrets and configs are also supported in Docker Stack. Matching the manual commands, the secret or config is declared (and, if necessary, created) within the YAML file at the top level, while the link to the desired services is then defined per service. Our complete example looks as shown in Listing 3 and can be deployed as follows:

cd nginx-secrets
docker stack deploy --compose-file docker-stack.yml https-example

Listing 3

version: "3.4"

services:
  proxy:
    image: "${NGINX_IMAGE:-nginx:alpine}"
    networks:
      - app
    ports:
      - "8080:80"
      - "8443:443"
    deploy:
      placement:
        constraints:
          - node.role==worker
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any
    configs:
      - source: https.conf
        target: /etc/nginx/conf.d/https.conf
    secrets:
      - site.key
      - site.crt
  whoami:
    image: emilevauge/whoami:latest
    networks:
      - app
    deploy:
      placement:
        constraints:
          - node.role==worker
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure

networks:
  app:
    driver: overlay

configs:
  https.conf:
    file: ./https-backend.conf

secrets:
  site.key:
    file: ./cert/site.key
  site.crt:
    file: ./cert/site.crt

Updating secrets or configs is a bit tricky. Docker cannot offer a generic solution for updating container file systems. Some processes expect a signal like SIGHUP when the configuration is updated; others do not allow a reload at all and have to be restarted. Docker therefore suggests creating new secrets or configs under a new name and swapping them in for the old versions via docker service update --config-rm … --config-add … (or --secret-rm/--secret-add).
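Following that suggestion, rotating the certificate from the example above might look like this sketch; the versioned secret name is just a convention:

docker secret create site.crt.v2 nginx-secrets/cert/site.crt

docker service update \
  --detach=false \
  --secret-rm site.crt \
  --secret-add source=site.crt.v2,target=site.crt \
  nginx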

Stateful services and volumes

If you want to set up databases via docker service, you will inevitably face the question of how the data survives a container restart. You are probably already familiar with volumes as a means to address this challenge. Usually, volumes are connected very closely to a specific container, so that both practically form one unit. In a Swarm with potentially moving containers, such a close binding can no longer be assumed: a container can always be started on another node where the required volume is either completely missing, empty, or even contains obsolete data. For data volumes in the order of several gigabytes and upwards, it is no longer practical to copy or move volumes to other nodes. Depending on the environment, there are several possible solutions.

The basic idea is to select a suitable volume driver, which then distributes the data to different nodes or to a central location. Docker therefore allows you to select the desired driver and, if necessary, configure it when creating volumes. There are already a number of plug-ins that connect the Docker Engine to new volume drivers; the documentation shows an extensive selection of them. You may find the specific NetApp or vSphere plug-ins appropriate for your environment. Alternatively, I recommend taking a closer look at the REX-Ray plug-in, as it enjoys a good reputation in the community and is quite platform-neutral.

Since the configuration and use of the different volume plug-ins and drivers is too specific to your particular environment, I will not include a detailed description here. Please note that you must use at least Docker 1.13, or in some cases even version 17.03. The necessary Docker-specific commands can usually be reduced to two calls, which are listed as an example for vSphere in Listing 4.

Listing 4

docker plugin install \
  --grant-all-permissions \
  --alias vsphere \
  vmware/docker-volume-vsphere:latest

docker volume create \
  --driver=vsphere \
  --name=MyVolume \
  -o size=10gb \
  -o vsan-policy-name=allflash

In addition to installing the plug-in under the alias vsphere, the second step creates the desired volume MyVolume. Part of the configuration is stored in the file system, while individual parameters can be set via -o at the time of volume creation.

Proxies with true Docker Swarm integration

Using the example of nginx, it was very easy to statically define the known upstream services. Depending on the application and environment, you may need a more dynamic concept and want to change the combination of services more often. In today's microservices environment, conveniently adding new services is common practice. Unfortunately, the static configuration of an nginx or HAProxy will then feel a bit uncomfortable. Fortunately, there are already convenient alternatives, of which Træfik is probably the most outstanding. Plus, it comes with excellent Docker integration!

Equivalent to the first stack with nginx, you will find the same stack implemented with Træfik. Træfik needs access to a Swarm manager's Docker Engine API to dynamically adapt its configuration to new or modified services. It is therefore placed on the manager nodes using deployment constraints. Since Træfik cannot guess certain service-specific settings, the relevant configuration is stored on the respective services through labels.

In our example, you can see how the network configuration (port and network) is defined, so that the routing will still reach the service even if it is attached to multiple networks. In addition, traefik.frontend.rule defines which incoming requests should be forwarded to the whoami service. Besides routing based on request headers, you can also use paths and other request elements as criteria. See the Træfik documentation for the respective information.
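For illustration, such labels in a stack file might look like the following sketch, based on Træfik's Docker backend at the time of writing; the host name is hypothetical, and in swarm mode the labels belong under deploy:

services:
  whoami:
    image: emilevauge/whoami:latest
    networks:
      - app
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.docker.network=app"
        - "traefik.frontend.rule=Host:whoami.example.com"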

Finally, there are more details on the integration with Docker Swarm in the Swarm User Guide. The example stack is still missing the configuration for HTTPS support, but since Træfik comes with native integration for Let's Encrypt, I only have to refer to the appropriate examples.

Conclusion

Docker Swarm offers even more facets than shown here, which may become more or less relevant depending on the context. Functions such as scheduled tasks or equivalents to cron jobs as services are often requested, but currently difficult to implement with built-in features. Nevertheless, compared to other container orchestrators, Docker Swarm is still neatly arranged and lean. There are only a few hurdles to overcome in order to achieve useful results quickly.

Docker Swarm takes care of many details, including configurable error handling, which is especially valuable for continuous deployment. With Docker Swarm, you don't have to maintain your own deployment code, and you even get some rudimentary load balancing for free. Features such as autoscaling can be supplemented via Orbiter and adapted to your own needs. The risk of experimentation remains relatively low because Docker Swarm is minimally invasive to the existing infrastructure. In any case, it's fun to dive right in with Swarm, whether via the command line, YAML file or directly via the Engine API.

Tobias Gesellchen is a developer at Europace AG and a Docker expert who likes to focus on DevOps, both culture- and engineering-wise.


Top Docker Tips From 12 Docker Captains

Docker is great, but sometimes you need a few pointers. We asked 12 Docker Captains their top hack for our favorite container platform. We got some helpful advice and specific instructions on how to avoid problems when using Docker. Read on to find out more!

DOCKER TIP #1

Ajeet Singh Raina is Senior Systems Development Engineer at DellEMC, Bengaluru, Karnataka, India. @ajeetraina

How do you use Docker?

Ajeet Singh Raina: Inside DellEMC, I work as a Senior Systems Development Engineer and spend a considerable amount of time playing around with datacenter solutions. Hardly a day goes by without talking about Docker and its implementation. Be it a system management tool, test certification, validation effort or automation workflow, I work with my team to look at how Docker can simplify the solution and save an enormous amount of execution time. Being part of Global Solution Engineering, one can find me busy talking about possible proofs of concept around datacenter solutions and finding better ways to improve our day-to-day job. Also, wearing a Docker Captain's hat, there is a sense of responsibility to help community users, hence I spend most of my time keeping a close eye on Slack community questions and discussions, and contributing blog posts almost every week.

Raina's Docker Tip:

Generally, docker service inspect outputs a huge JSON dump. It becomes quite easy to access individual properties using Docker service inspection filtering and the template engine. For example, if you want to list the port which WordPress is using for a specific service:


$ docker service inspect -f '{{with index .Spec.EndpointSpec.Ports 0}}{{.TargetPort}}{{end}}' wordpressapp

Output:

80

This will fetch just the port number out of the huge JSON dump. Amazing, isn't it?

DOCKER TIP #2

Nick Janetakis is Docker Trainer and creator of www.diveintodocker.com. @nickjanetakis

How do you use Docker?

Nick Janetakis: I use Docker in development for all of my web applications, which are mostly written in Ruby on Rails and Flask. I also use Docker in production for a number of projects. These systems range from single-host deployments to larger systems that are scaled and load balanced across multiple hosts.

Janetakis' Docker Tip:

Don't be afraid of using Docker. Using Docker doesn't mean you need to go all-in with every single high-scalability buzzword you can think of. Docker isn't about deploying a multi-datacenter, load-balanced cluster of services with green/blue deploys that allow for zero-downtime deploys with seamless continuous integration and delivery.


Start small by using Docker in development, and try de-ploying it to a single server. There are massive advan-tages to using Docker at all levels of skill and scale.

DOCKER TIP #3

Gianluca Arbezzano is Site Reliability Engineer at InfluxData, Italy. @gianarb

How do you use Docker?

Gianluca Arbezzano: I use Docker to ship applications and services like InfluxDB around big cloud services. Containers allow me to ship the same application in a safe way. I use Docker a lot to create and manage environments: with Docker Compose I can start a fresh environment to run smoke tests or integration tests on a specific application in a very simple and easy way. I can put it in my pipeline and delivery process to enforce my release cycle.

Arbezzano's Docker Tip:

docker run -it -p 8000:8000 gianarb/micro:1.2.0

DOCKER TIP #4

Adrian Mouat is Chief Scientist at Container Solutions. @adrianmouat

How do you use Docker?

Adrian Mouat: My daily work is helping others with Docker and associated technologies, so it plays a big role. I also give a lot of presentations, often running the presentation software from within a container itself.

Mouat's Docker Tip:

I have a whole presentation of tips that I'll be presenting at DockerCon EU! But if you just want one, it would be to set the `docker ps` output format. By default it prints a really long line that looks messy unless your terminal takes up the whole screen. You can fix this by using the `--format` argument to pick which fields you're interested in:

docker ps --format \
  "table {{.Names}}\t{{.Image}}\t{{.Status}}"

And you can make this the default by configuring it in your `.docker/config.json` file.
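That default lives under the psFormat key in ~/.docker/config.json, for example:

{
  "psFormat": "table {{.Names}}\t{{.Image}}\t{{.Status}}"
}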

DOCKER TIP #5

Vincent De Smet works as DevOps Engineer at Honestbee, Singapore. @vincentdesmet

How do you use Docker?

Vincent De Smet: Docker adoption started out mainly in the CI/CD pipeline and from there spread through staging environments to our production environments. At my current company, developer adoption (using containers to develop new features for existing web services) is still lacking, as each developer has their own preferred way of working. Given that containers are prevalent everywhere else and Docker tools for developers keep improving, it will only take time before developers choose to adopt these into their daily workflow. I personally, as a DevOps engineer in charge of maintaining containerized production environments as well as improving developer workflows, troubleshoot most issues through Docker containers and use containers daily.

Also visit this Session:

Es muss nicht gleich Docker sein – IT Automation, die zu einem passt (It doesn't have to be Docker right away – IT automation that fits)
Sandra Parsick (freelancer)

Docker is on everyone's lips and is often touted as a cure-all for deployment problems. This leads to the assumption that automated deployments are only possible with Docker, even though provisioning tools such as Ansible offer solutions outside the container world. Their use is often not even considered, because at some point, in the distant future, Docker is supposed to be introduced in the company anyway. Automation keeps being postponed because the effort of doing it in one step seems too big, although Ansible could help with little effort right now. The confusion is increased by the fact that the usage scenarios of provisioning tools and container technologies are wrongly mixed up and thus regarded as competitors. Using Ansible and Docker, this talk explains how a provisioning tool differs from a container technology. It shows how Ansible can already solve problems on the way to dockerizing the infrastructure, and how using both technologies together combines the advantages of both worlds.





De Smet's Docker Tip:

Make sure to follow the "Best practices for writing Dockerfiles" – these provide very good reasons why you should do things a certain way, and I see way too many existing Dockerfiles that do not follow them.

Anyone slightly more advanced with Docker will also gain a lot from mastering the Alpine Linux distribution and its package manager.

And if you're getting started, training.play-with-docker.com is an amazing resource.


DOCKER TIP #6

Chanwit Kaewkasi is Docker Swarm Maintainer and has ported Swarm to Windows. @chanwit

How do you use Docker?

Chanwit Kaewkasi: I help companies in South-East Asia and Europe design and implement their application architectures using Docker, and deploy them on Docker Swarm clusters.

Kaewkasi's Docker Tip:

`docker system prune -f` always makes my day.

DOCKER TIP #7

Kendrick Coleman is Developer Advocate for {code} by Dell EMC. @kendrickcoleman

How do you use Docker?

Kendrick Coleman: Docker plays a role in my daily job. I am eager to learn the innards to find new corner cases. It makes me excited to know I can turn knobs to make applications work the way I want. There is a misconception that persistent applications can't or shouldn't run in containers. I'm proud that the team I work with builds tools to make running persistent applications easy and seamless, so they can be integrated as part of a tool chain.

Coleman's Docker Tip:

Start off easy. Always go for the low-hanging fruit like a web server and make it work for you. Then take your single host, pick an orchestrator and use that to make your app resilient. After that, move to an application that uses persistent data. This allows you to progress and move all your applications off of VMs and into containers.

DOCKER TIP #8

John Zaccone works as Cloud Engineer and Developer Advocate at IBM. @JohnZaccone

How do you use Docker?

John Zaccone: Right now, I work at IBM as a developer advocate. I work with developers from other companies to help them improve their ability to push awesome business value to production. I focus on adopting DevOps automation, containers, and container orchestration as a big part of that process.

Zaccone's Docker Tip:

I organize a meetup where I interface with a lot of developers and operators who want to adopt Docker, but find that they either don't have the time or can't clearly define the business case for using Docker. My advice to companies (and this applies to all new technologies, not just Docker) is to allow developers some freedom to explore new solutions. Docker is a technology where the benefits are not 100% realized until you get your hands on it and understand exactly how it will benefit you in your use case.

Also visit this Session:

Infrastructure as Code: Build Pipelines with Docker and Terraform
Kai Tödter (Siemens AG)

Many software projects use build pipelines including tools like Jenkins, SonarQube, Artifactory etc. But often those pipeline tools are installed and maintained manually. There are certain risks with this approach, and in case of failure it often takes a long time to have a running pipeline again. This session shows how to automate the creation of a build pipeline. With Terraform, a Docker infrastructure is created at AWS, where Jenkins, SonarQube and Artifactory are pre-configured and deployed. The pipeline is ready for operation in just a few minutes, as Kai will demonstrate in a live demo.


DOCKER TIP #9

Nicolas De Loof is Docker enthusiast at CloudBees. @ndeloof

How do you use Docker?

Nicolas De Loof: For my personal use I rely on Docker for various tests, to ensure I have a reproducible environment I can share with others, as well as to prevent impacts on my workstation. My company also offers a Docker-based elastic CI/CD solution, "CloudBees Jenkins Enterprise", and as a Docker expert I try to make it adopt the best Docker features.

De Loof's Docker Tip:

Considering immutable infrastructure, there is a lot of middleware that uses the filesystem as a cache, and one might want to avoid making this persistent. So I like to constrain such middleware by running it as a read-only container (docker run --read-only) to know exactly where it needs to access the filesystem, then create a volume for the actual persistent data directory and a tmpfs for everything else, typically caches or log files.
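A sketch of that pattern, with the image name and paths as placeholders:

docker run -d \
  --read-only \
  --tmpfs /tmp \
  -v app-data:/var/lib/app \
  myapp:latest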

DOCKER TIP #10

Lorenzo Fontana is DevOps expert at Kiratech. @fntlnz

How do you use Docker?

Lorenzo Fontana: My company is writing open source software for Docker and other containerization technologies. I'm also involved in Docker daily, doing mainly reviews on issues and PRs. I do a lot of consultancy to help companies use containers, and Docker by reflection. I used Docker for a while to spawn GUI software on my computer, and then I switched to systemd-nspawn. In the future, I'll probably go to runc.

Fontana's Docker Tip:

Not many people know about multi-stage builds yet; another cool thing is that Docker now handles configs and secrets. A lot also happens in the implementation: just pick one project under the Docker or the Moby organizations on GitHub; there are a lot of implemented things that can open your eyes to how things work.
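A minimal multi-stage Dockerfile might look like the following sketch, here with a Go binary as an arbitrary example; the first stage compiles, and only the artifact ends up in the final image:

# build stage: compile the application
FROM golang:alpine AS build
WORKDIR /go/src/app
COPY . .
RUN go build -o /bin/app .

# final stage: only the binary, no toolchain
FROM alpine
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]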

DOCKER TIP #11

Brian Christner is Cloud Advocate and Cloud Architect at Swisscom. @idomyowntricks

How do you use Docker?

Brian Christner: I personally use Docker for every new project I'm working on. My personal blog runs on Docker, as do the monitoring projects I'm working on and applications for IoT on Raspberry Pis. At work, Docker is being used across several teams. We use it to provision our Database-as-a-Service offerings and for development purposes. It is very versatile and used across multiple verticals within our company. One of our use cases is featured on Docker's website: "Swisscom goes from 400 VMs to 20 VMs, maximizing infrastructure efficiency with Docker".

Christner's Docker Tip:

I share all my favorite tips via my blog.

DOCKER TIP #12

Antonis Kalipetis is CTO at SourceLair, a Docker-based online IDE. @akalipetis

How do you use Docker?

Antonis Kalipetis: I use Docker for all sorts of things: as a tool to create awesome developer tools at SourceLair, in my local development workflow, and for deploying production systems for our customers.

Kalipetis' Docker Tip:

My tip would be to always use Docker Swarm, or another orchestrator, for deployment, even if you have a single-machine "cluster". The foundations of Swarm are well thought out and work perfectly on just one machine; if you're not using it because you don't have a "big enough" cluster, you're shooting yourself in the foot.

DOSSIER Kubernetes

Kubernetes Basics

How to build up-to-date (container) applications

By Timo Derstappen

A system such as Kubernetes can be viewed from different angles. Some think of it in terms of infrastructure, as the successor to OpenStack, although the infrastructure is cloud-agnostic. For others, it is a platform which makes it easier to orchestrate microservice architectures (or cloud-native architectures, as they are called nowadays), to deploy applications more easily, and to make them more resilient and scalable.

For some people, it is a replacement for automation and configuration management tools: leaving complex imperative deployment tools behind and moving on to declarative deployments, which simplify things but nonetheless grant full flexibility to developers.

Kubernetes is not only a projection surface for many expectations. It is currently one of the most active open source projects, and many large and small companies are working on it. Under the umbrella of the Cloud Native Computing Foundation (CNCF), which belongs to the Linux Foundation, a large community is organizing itself. Of course, the focus is on Kubernetes itself, but other projects such as Prometheus, OpenTracing, CoreDNS and Fluentd are also part of the CNCF by now. Essentially, the Kubernetes project is organized through Special Interest Groups (SIGs). The SIGs communicate via Slack, GitHub and weekly meetings that are open for everyone to attend.

In this article, the focus is less on the operation and internals of Kubernetes than on the user interface. We explain the building blocks of Kubernetes needed to set up our own applications or build pipelines on a Kubernetes cluster.

Orchestration

On a single computer, resource distribution is largely the job of the operating system. Kubernetes performs a similar role in a Kubernetes cluster. It manages resources such as memory, CPU and storage, and distributes applications and services to containers on cluster nodes. Containers themselves have greatly simplified developers' workflows and helped them become more productive. Now Kubernetes takes containers into production. This global resource management has several advantages, such as more efficient utilization of resources, seamless scaling of applications and services, and, more importantly, high availability and lower operational costs. For orchestration, Kubernetes offers its own API, which is usually addressed via the CLI kubectl.
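Typical first contacts with that API via kubectl look like this; the manifest file name is a placeholder:

kubectl get nodes
kubectl get pods --all-namespaces
kubectl apply -f deployment.yaml
kubectl describe pod <pod-name>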

The most important functions of Kubernetes are:

• Containers are launched in so-called pods.
• The Kubernetes Scheduler assures that all resource requirements on the cluster are met at all times.
• Containers can be found via services. Service discovery allows containers distributed across the cluster to be addressed by name.
• Liveness and readiness probes continuously monitor the state of applications on the cluster.



• The Horizontal Pod Autoscaler can automatically adjust the number of replicas based on different metrics (e.g. CPU).
• New versions can be rolled out via rolling updates.

Basic concepts

The concepts described below, however rudimentary the description, are typically needed to start a simple application on Kubernetes. A combined example manifest follows the list.

• Namespace: Namespaces can be used to divide a cluster into several logical units. By default, namespaces are not really isolated from each other. However, there are ways to restrict users and applications to certain namespaces.

• Pod: Pods represent the basic concept for managing containers. They can consist of several containers, which are subsequently launched together in a common context on a node. These containers always run together; if you scale a pod, the same set of containers is started again. A pod is practical in that the user can run processes together which originate from different container images. An example would be a separate process which sends a service's logs to a central logging service. In the common context of a pod, containers can share network and storage. This allows porting applications to Kubernetes which had previously run together in a machine or VM. The advantage is that you can keep the release and development cycles of the individual containers separate. However, developers should not make the mistake of pushing all processes of a machine into one pod at once; the cluster would lose the flexibility of distributing resources evenly and scaling them separately.

• Label: One or more key/value pairs can be assigned to each resource in Kubernetes. Using a selector, corresponding resources can be identified from these pairs. This means that resources can be grouped by labels. Some concepts such as services and ReplicaSets use labels to find pods.

• Service: Kubernetes services are based on a virtual construct – an abstraction, or rather a grouping of existing pods, which are matched using labels. With the help of a service, these pods can then, in turn, be found by other pods. Since pods themselves are very volatile and their addresses within a cluster can change at any time, services are assigned specific virtual IP addresses. These IP addresses can also be resolved via DNS. Traffic sent to these addresses is passed on to the matching pods.

• ReplicaSet: A ReplicaSet is also a grouping, but instead of making pods locatable, it makes sure that a certain number of pods run in the cluster altogether. A ReplicaSet tells the scheduler how many instances of a pod are to run in the cluster. If there are too many, some will be terminated until the designated number is reached. If too few are running, new pods will be launched.

• Deployment: Deployments are based on ReplicaSets. More specifically: Deployments are used to manage ReplicaSets. They take care of starting, updating, and deleting ReplicaSets. During an update, deployments create a new ReplicaSet and scale the pods upwards. Once the new pods run, the old ReplicaSet is scaled down and ultimately deleted. A deployment can also be paused or rolled back.

• Ingress: Pods and services can only be accessed within a cluster, so if you want to make a service accessible for external access, you have to use another concept. Ingress objects define which ports and services can be reached externally. Unfortunately, Kubernetes itself does not ship a controller which acts on these objects. However, there are several implementations within the community, the so-called ingress controllers. A typical example is the nginx Ingress Controller.

• Config Maps and Secrets: Furthermore, there are two concepts for configuring applications in Kubernetes. Both concepts are quite similar, and typically the configurations are passed to the pod using either the file system or environment variables. As the name suggests, sensitive data is stored in secrets (see the sketch after this list).
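As an illustration, a ConfigMap and its use in a pod might look roughly like this (a minimal sketch; the names, image, and configuration key are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  LOG_LEVEL: "info"         # hypothetical configuration key
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myorg/app:latest # hypothetical image
    envFrom:
    - configMapRef:
        name: app-config    # exposes LOG_LEVEL as an environment variable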

Also visit this Session:

Kubernetes Patterns
Dr. Roland Huß (Red Hat)

The way we design, develop, and run applications on cloud native platforms like Kubernetes differs significantly from the traditional approach. When working with Kubernetes, there are fewer concerns for developers to think about, but at the same time, there are new patterns and practices for solving everyday challenges. In this talk, we will look at a collection of common patterns for developing cloud native applications. These patterns encapsulate proven solutions to common problems and help you to avoid reinventing the wheel. After a short introduction to the Kubernetes platform we will look at the following pattern categories:

• Foundational patterns which build the basis of the Kubernetes platform

• Behavioral patterns describe concepts for different types of applications

• Structural patterns are for structuring your cloud native application

• Configuration patterns provide various approaches to application configuration

In the end, you will have a solid overview of how common problems can be solved when developing cloud native applications for Kubernetes.


An exemplary application
To deploy a simple application to a Kubernetes cluster, a deployment, a service, and an ingress object are required. In this example, we deploy a simple web server which responds with a Hello World website. The deployment defines two replicas of a pod, each with one container based on giantswarm/helloworld. Both the deployment and the pods are labeled helloworld, and the deployment is located in the default namespace (Listing 1).

Listing 1

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: giantswarm/helloworld:latest
        ports:
        - containerPort: 8080

To make the pods accessible in the cluster, an appropriate service needs to be specified (Listing 2). This service is assigned to the default namespace as well and has a selector on the label helloworld.

Listing 2

apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
  namespace: default
spec:
  selector:
    app: helloworld
  ports:
  - port: 8080   # matches the containerPort in Listing 1 and the servicePort in Listing 3

All that is missing now is that the service should be accessible externally. Therefore, the service receives an external DNS entry, whereby the cluster's Ingress controller then forwards the traffic which carries this DNS entry in its host header to the helloworld pods (Listing 3).

Listing 3

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: helloworld
  name: helloworld
  namespace: default
spec:
  rules:
  - host: helloworld.clusterid.gigantic.io
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld
          servicePort: 8080

Note: Kubernetes itself does not carry its own Ingress controller. However, there are some implementations: nginx, HAProxy, Træfik.

Professional tip: If there is a load balancer in front of the Kubernetes cluster, it is usually set up so that the traffic is forwarded to the Ingress controller. The Ingress controller service should then be made available on all nodes via NodePorts. Cloud providers typically use the LoadBalancer type. This type ensures that the cloud provider extension of Kubernetes automatically generates and configures a new load balancer (a sketch of such a service follows after the kubectl command below).

These YAML definitions can now be stored in individual files or collectively in a single file, and loaded onto a cluster with kubectl.

kubectl create -f helloworld-manifest.yaml

The sample code is on GitHub.
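To illustrate the professional tip above, exposing an ingress controller through a cloud provider's load balancer might look roughly like this (a minimal sketch; the controller name and labels are assumptions, not taken from a specific installation):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical controller name
spec:
  type: LoadBalancer               # the cloud provider provisions a load balancer
  selector:
    app: nginx-ingress             # hypothetical label of the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443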

Helm
It is possible to bundle YAML files together in Helm Charts, which helps to avoid a constant struggle with single YAML files.

Also visit this Session:

Kube-Node: Let your Kubernetes Cluster auto-manage its Nodes

Guus van Weelden (Loodse)

Kube-Node is a community project to enable generic node management for Kubernetes. The objective is to provide developers with a simple way to scale clusters without operations intervention and regardless of the underlying infrastructure. This talk will introduce the concepts of NodeClass and NodeSet, which enable you to utilize kubectl to deploy your nodes. Additionally, you will learn how to enhance the concept for specific requirements with your own controller. https://github.com/kube-node


Helm is a tool for the installation and management of complete applications. Furthermore, the YAML files are incorporated as templates into the charts, which makes it possible to establish different configurations. This allows developers to run their application from the same chart in a test environment, but with a different configuration in the production environment. In short: if the cluster's operating system is Kubernetes, then Helm is its package manager. Helm does, however, need a service called Tiller, which can be installed on the cluster via helm init. The following commands can be used to install Jenkins on the cluster:

helm repo update
helm install stable/jenkins

The Jenkins chart will then be loaded from GitHub. There are also so-called application registries, which can manage charts similar to container images (for example quay.io). Developers can now use the installed Jenkins to deploy their own Helm Charts, although this does require the installation of a Kubernetes CI plug-in for Jenkins. This results in a new build step, which can deploy the Helm Charts. The plug-in automatically creates a cloud configuration in Jenkins and also configures the login details for the Kubernetes API.
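To illustrate the per-environment configuration mentioned above, the same chart can be installed with different value files; a minimal sketch with hypothetical chart and file names:

# -f overrides the chart's default values per environment
helm install -f values-test.yaml ./mychart    # test environment
helm install -f values-prod.yaml ./mychart    # production environment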

More concepts
Building distributed software can be challenging. This is one of the main reasons for Kubernetes to provide even more concepts that simplify the construction of such architectures. In most cases, these building blocks are special variations of the resources described above. They can also be used to configure, isolate or extend resources.

• Job: Starts one or more pods and ensures that they run to successful completion.

• CronJob: Starts a job at a specific time or on a recurring schedule (see the sketch after this list).

• DaemonSet: Ensures that pods are distributed to all (or only a determined subset of) nodes.

• PersistentVolume, PersistentVolumeClaim: Definition of the storage media in the cluster and their assignment to pods.

• StorageClass: Defines the storage options available in the cluster.

• StatefulSet: Similar to a ReplicaSet, it starts a specific number of pods, but each of these has a stable, identifiable ID which is still assigned to the pod even after a restart or a relocation. This is useful for stateful applications such as databases.

• NetworkPolicy: Allows the definition of a set of rules which control network traffic in a cluster.

• RBAC: Role-based access control in a cluster.
• PodSecurityPolicy: Defines the capabilities of certain pods, for example, which of a host's resources can be accessed by a container.

• ResourceQuota: Restricts usage of resources inside a Namespace.

• HorizontalPodAutoscaler: Scales pods based on cluster metrics.

• CustomResourceDefinition: Extends the Kubernetes API with a custom object type. With a custom controller, these objects can then also be managed within the cluster (see: Operators).
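As a brief illustration of the CronJob concept mentioned above, a job that runs every five minutes could be declared roughly like this (a minimal sketch with a hypothetical name, using the batch/v1beta1 API that was current at the time of writing):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello                # hypothetical name
spec:
  schedule: "*/5 * * * *"    # standard cron syntax: every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure   # retry the container if the job fails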

In this context, one should not forget that the community is developing many tools and extensions for Kubernetes. The Kubernetes incubator currently contains 27 additional repositories, and many other software projects offer interfaces for the Kubernetes API or already ship with Kubernetes manifests.

Conclusion
Kubernetes is a powerful tool and the sheer depth of every single concept is impressive, though it will probably take some time to get a clear overview of the tool's possible operations. It is important to mention how all of its concepts build upon each other, so that they form building blocks which can be combined into whatever is needed at the time. This is one of the main strong points Kubernetes has, in contrast to regular frameworks, which abstract runtimes and processes and press applications into a specific form. Kubernetes grants a very flexible design in this regard. It is a well-rounded package of IaaS and PaaS, which can draw upon Google's many years of experience in the field of distributed computing. This experience can also be seen in the project's contributors, who were able to apply lessons learned from mistakes made in previous projects such as OpenStack, Cloud Foundry and Mesos. Today Kubernetes is in widespread use; all kinds of companies are using it, from GitHub and OpenAI to Disney.

Timo Derstappen is co-founder of Giant Swarm in Cologne. He has many years of experience in building scalable and automated cloud architectures, and his interest is mostly drawn to lightweight product, process and software development concepts. Free software is a basic principle for him.


Interview with Nicki Watt, CTO at OpenCredo

Taking the pulse of DevOps: “Kubernetes has won the orchestration war”

By Gabriela Motroc

JAXenter: What are your DevOps predictions for 2018? What should we pay attention to?

Nicki Watt: The increasing adoption of complex distributed systems, underpinned by microservices and serverless architectures, is resulting in systems with more unpredictable outcomes. I believe the next wave of DevOps practices and tooling will look to address these challenges by focusing on reliability, as well as gaining more intelligent runtime insight. I see disciplines like Chaos Engineering, and toolchains optimized for runtime observability, becoming more prevalent.

I also believe there is a very real skills shortage in the DevOps space. This will increasingly incentivize organizations to offload their "DevOps" responsibility to commoditized offerings in the cloud. For example, migrating from bespoke, in-house Kubernetes clusters to a PaaS offering from cloud vendors (e.g. EKS, GKE, AKS).

JAXenter: What makes a good DevOps practitioner?

Nicki Watt: Let's be honest, technical competence is a key factor. To be truly effective, however, you need a combination of technical competence and human empathy. Being able to appreciate the fundamental technical and human concerns of your colleagues goes a long way in helping you to become a key part of a team that can drive and deliver change.

JAXenter: Will DevOps stay as it is now or is there a chance that we’ll be calling it DevSecOps from now on?

Nicki Watt: I have always seen security as a core component of any DevOps initiative. As security tools and processes become more API-driven and automation-friendly, we will begin to see more aspects being incorporated into pipelines and processes. Whatever we call it, as long as we build security in from the beginning, that's all that matters!

Should you pay more attention to security when drafting your DevOps approach? Is there a skills shortage in the DevOps space? Will contain-ers-as-a-service become a thing in 2018? We talked with Nicki Watt, CTO at OpenCredo about all this and more. 

Nicki Watt is a techie at heart and CTO at OpenCredo. She has experience working as an engineer, developer, architect and consultant across a broad range of industries, including within Cloud and DevOps. Whether programming, architecting or troubleshooting, her personal motto is "Strive for simple when you can, be pragmatic when you can't". Nicki is also co-author of the book Neo4j in Action, and can be seen speaking at various meetups & conferences.



JAXenter: Do you think more organizations will move their business to the cloud in 2018? 

Nicki Watt: Yes, for a few reasons, but I shall elaborate on just two.

Security concerns have been a significant factor holding organizations back from adopting the cloud, but this is changing. Education, as well as active steps taken by cloud vendors to address security concerns, has allowed previously security-wary organizations to be enticed into action. Additionally, I believe hearing cloud success stories from traditional enterprises (at conferences etc.) acts to remove barriers. It emboldens others in similar situations to (re)consider what benefits it may bring them.

The ability to innovate, experiment and scale quickly is something at which the cloud excels. Whilst running production workloads may still be a step too far for some organizations, many are prepared to start using the cloud for experimentation and dev/test workloads. As more familiarity and experience is gained, production workloads, in time, will also be conquered.

JAXenter: Will containers-as-a-service become a thing in 2018? What platform should we keep an eye on?

Nicki Watt: I believe so. Managing complex distributed systems is hard. The shortage of good skills, and the desire to focus available engineering effort on adding genuine business value, makes CaaS a good option for many organizations.

The key differentiator between CaaS platforms is the orchestration layer, and herein lies the choice. In my opinion, all other things considered equal, Kubernetes has won the orchestration war. As part of the CNCF — and backed by a myriad of impressive organizations — the Kubernetes platform provides a consistent, open, vendor-neutral way to manage and run your workloads. It is also available in various CaaS forms from the major cloud vendors now.

JAXenter: Is Java ideal for microservices development? Should companies continue to invest resources in this direction?

Nicki Watt: Absolutely, no, maybe … it depends. Any technology choice involves tradeoffs, and the language you choose to write your microservices in is no different. One of the benefits of microservices is that you should be able to mix and match whatever is most appropriate, and I don't see why Java should not be in the mix.

In its favor, Java has a large ecosystem of supporting tools and frameworks, including those supporting microservice architectures (Spring Boot, Dropwizard etc.). Recruitment-wise, Java developers are also far easier to get hold of. It is not, however, without its critics: too verbose, too slow and heavy on resources, especially for short-running processes. In these cases, maybe an alternative would be better.

The question for me is, what are you optimizing for? Are you planning on running hundreds of microservices or tens? Are you latency, memory or process-startup sensitive? What does your workforce and current skill base look like? And a crucial one, especially for enterprises: what freedom are you willing, or not willing, to give development teams? The answer lies in the grey intersection of the responses to questions such as these.

JAXenter: Containers (and orchestration tools) are all the rage right now. Will general interest in containers grow this year? 

Nicki Watt: Yes, I think so. Containers offer a greatly simplified packaging and deployment strategy, and whilst serverless is also on the charge, I see interest in containers continuing. In terms of handling older applications, not everything has to be implemented in containers; this depends on business objectives and requirements. Sometimes a complete rewrite is required, but progression along slightly gentler evolutionary tracks is also a good option.

For example: carve monolithic applications up, implementing only the parts in new tech where it makes sense. Alternatively, merely being able to get out of a data center and into the cloud, even on VMs as a first pass, could yield great business returns too.

Also visit this Session:

Kubernetes Security: from Image Hygiene to Network Policies
Michael Hausenblas (Red Hat)

This talk provides an overview of security concerns in the context of Kubernetes. We will focus on security best practices as well as tooling from a developer's point of view. The goal is to familiarise developers with security features and provide suggestions around the following areas:

• container image hygiene (how to select base images, OpenSCAP, etc.)

• handling sensitive data (secrets, auditing)

• non-privileged containers (based on http://canihaznonprivilegedcontainers.info and PodSecurityPolicy)

• using Kubernetes RBAC (service accounts, default roles, securing your app)

• service communication control (Network Policies, Istio)

All best practices/recipes will be made available via a GitHub repo and I’ll demo some of them live.


JAXenter: What challenges should Kubernetes address in 2018?

Nicki Watt: As Kubernetes-based CaaS offerings increase, it would be nice to see the community concentrating on how the security of the cloud providers is better integrated and offered through the Kubernetes platform.

JAXenter: How will serverless change in 2018? Will it have an impact on DevOps? 

Nicki Watt: Adoption-wise, serverless is still pretty new, so it's early days to make strong predictions. One obvious way I see it evolving is by supporting a broader set of languages and options, e.g. as already seen with AWS Lambda support for Golang.

I still observe that people have a hope that serverless will usher in a "NoOps" era — i.e. one where they don't have to worry about operations at all — it will magically happen! The reality is that people end up acquiring an "AlternativeOps" model. Serverless can magnify many distributed system challenges; for example, there tend to be more processes compared to, say, a microservices architecture. They also often have a temporal (limited time to run) angle to them. Whilst there may be less low-level config going on, there will be more at the API, inter-process and runtime inspection level (logging, tracing and debugging). I believe more DevOps processes and tooling will need to focus on providing cohesive intelligence and insight into the runtime aspects of such systems.

JAXenter: Will serverless be seen as a competitor to container-based cloud infrastructure or will they somehow go hand in hand?

Nicki Watt: I see them more as options in your architectural toolbox. Each offers a very different architectural approach and style, and they have different trade-offs. Sometimes all you will need is a hammer; other times, a quick-fire nail gun; other times a bit of both.

Context is always key, and your resulting architecture should evolve based on questions like: Do you need long-running processes? Are you latency and/or cost sensitive? Is this an event-driven system? etc.

Architectures also change and evolve. The only approach I would definitely not recommend is one where a decision to go in some direction is made up front, at a high level, without considering context.

JAXenter: Could you offer us some tips & tricks that you discovered this year and decided to stick to? 

Nicki Watt: More a principle than a tip or trick per se, but one I feel more strongly about as time goes on: "Invest your engineering effort in what matters most and adds value; offload the rest".

Choose to concentrate your engineering resources on work which actually adds business value. Where someone else (a cloud provider or SaaS) has competently demonstrated the ability to manage and run complex supporting infrastructure-type resources, and it fits (or you can adjust to make it fit) your requirements, let them do it.

A specific simple example, in this case, is using something like AWS RDS instead of running your own HA RDBMS setup on VMs, but there are many more (K8s clusters, observability platforms etc.). In my opinion, this approach saves time and effort and gives you (and your investors) more bang for your buck than trying to do it yourself.

Thank you very much!

Also visit this Session:

Kubernetes Architecture 101
Erkan Yanar (linsenraum.de)

This talk is not purely hands-on. What is it about? It explains the architecture of Kubernetes. What are a kubelet, an API server etc., and how do they communicate to make it possible to run our (micro)services and containers? By the end of the session it should be clear how Kubernetes works and why we should see Kubernetes not just as an orchestrator, but as our new and exclusive infrastructure for Docker. Buzzwords:

• etcd

• controller-manager

• api-server

• scheduler

• kubelet

• kube-proxy

• DNS

• Monitoring/Logging

• ingress

• Pods/Cron[Jobs]

• Services


DOSSIER Continuous Delivery


Interview with Clemens Utschig-Utschig

OpenShift, Kubernetes & Jenkins: "We wanted to show a start-up can develop entirely within the cloud"

Continuous delivery and complete integration in the cloud? Yes, that's possible, according to Clemens Utschig-Utschig, CTO and Head of Engineering at BI X GmbH. We caught up with him to talk about why they chose to combine OpenShift, Kubernetes, Jenkins and AWS, and what problems occurred during the development of the CI pipeline.

JAXenter: Hi Clemens! Thank you for taking the time to answer my questions. In your session at DevOpsCon 2018, you will talk about your experiences during the development of your Continuous Integration Pipeline at BI X GmbH. The first question would be: what elements does it consist of?

Clemens Utschig-Utschig: Hello Dominik, many thanks for the invitation. First of all, I would like to make it clear that my session is not only about the CI platform or pipeline of BI X — Boehringer Ingelheim's digital startup — but also about our parent company's CI platform or pipeline. We pay attention to replication, so what we do at BI X must also work properly in a global IT environment with 1,500 people.

Now regarding the components: we do source code management in Atlassian Bitbucket, our container management on OpenShift with Kubernetes, and Jenkins as a container directly in the middle of it. Everything is tightly interlocked, and as a developer, you only need to take care of code — the rest, including deployment and (where desired) environment provisioning, is done by the CI.

Clemens Utschig-Utschig works for Boehringer Ingelheim's IT organization, responsible for the global technology strategy. He is the CTO and head of engineering at BI X (bix-digital.com), a digital startup incubator. Prior to this, he headed up Marketing and Sales Architecture, enabling the digital revolution and the global master data management program, end to end from build to run, within Global Business Services, the internal Shared Service Center of BI. Before joining BI, he worked at the Oracle headquarters in the United States as a platform architect, working with the Fusion Applications development as an advisor, and supported customers all around the world on their journey towards implementing enterprise-wide SOA. During his platform engineering years, Clemens was responsible for cross-product integration and strategic standards, was a member of Oracle's SOA platform steering committee, and served on the OASIS TC for Service Component Architecture (SCA).


Prior to this, we implemented an archetype management system to ensure that technologies and components are set up and used in accordance with the standard. From the idea of a new component (e.g. a microservice) to the first deployment, it takes less than three minutes. The same applies to mobile applications with a pipeline-based build.

JAXenter: Why did you choose this combination of OpenShift, Jenkins, and AWS?

Clemens Utschig-Utschig: It was important, especially with regard to the transfer of assets and knowledge, that pilots of BI X are easily transferable to our parent company. Accordingly, we rely on widespread technologies that many developers know and have been mastering for years. Neither Jenkins nor Kubernetes is a tool that lacks recognition. It was a compromise between cutting-edge infrastructure and proven technology.

We also wanted to use BI X to show that a startup can develop completely in the cloud. Software-defined everything (SDE) was also a must, especially since this is where the future lies: we want to move away from ordering servers and setting up an OS, towards "describe it in configuration — and have the platform make it work".

JAXenter: Can you give our readers a glimpse of the problems that have arisen in the process of establishing your continuous integration pipeline?

Clemens Utschig-Utschig: Well, most developers have worked with Jenkins before. However, in combination with OpenShift and Kubernetes, there is a lot to consider, because there are many things that do not work as well with OpenShift (version 3.7, as of today) as they do when you only use Jenkins. This starts with memory and scheduling and ends with secret management.

In between, there are pods that simply disappear and a very simple kubelet scheduler — and quickly 50 people in four teams get to "enjoy" things they shouldn't have to care about. Every now and then, this reminds me of the beginning of OSGi. There, the problems were bad logging, lousy error messages and almost magical "Dependency & Memory Management".

Another topic is certainly the experience with OpenShift. Many developers are up to speed with Docker and are irritated by the OpenShift overhead, which however helps a lot in container and microservice management. Security and the management of volumes are more problematic: here, it is just a matter of training and a good portion of curiosity.

JAXenter: How did you solve the individual problems?

Clemens Utschig-Utschig: Just come to my session and you'll find out… ;) Fun aside: much of it is simple and undocumented, and even with a wealth of experience in Kubernetes, it's not so easy to solve. Ultimately, it was a mix of attentive bug fixing and contributors from OpenShift and Kubernetes who helped us. This has resulted in an extensive knowledge base, which I will present.

JAXenter: What insight should every visitor of your session take home?

Clemens Utschig-Utschig: A) It works; and B) once it does, it's super cool and makes the developer life much easier.

Also visit this Workshop:

Workshop: From Zero to Continuous Integration and Continuous Delivery
Nir Koren (LivePerson)

Continuous Integration (CI) and Continuous Delivery (CD) are the development practice of applying small code changes frequently. It's well known that this becomes more and more essential to any agile-based organization. This workshop will help you to understand the CI/CD concepts and mindset, and how to implement the practices that help to form the DevOps culture and implement CI/CD for software development. We will go over Git and GitHub, Maven basics, and Jenkins installation and configuration. The final goal is to have a CI/CD process on a "Hello World" project that runs for each change in a GitHub.com demo repository. This workshop is mainly for:

• Software Developers and Test Engineers

• Scrum Masters, Product Owners

• Whoever is interested to know about implementing CI/CD

Also visit this Session:

Continuous Integration with Jenkins and OpenShift – Stories of BI X
Clemens Utschig-Utschig (Boehringer Ingelheim RCV GmbH & Co KG)

This talk is about our learnings using OpenShift on AWS with Jenkins – what worked and all the things we had to painfully learn and discover. Join me on the ride of provisioning, pipeline development, vanishing pods – and non-reproducible errors – and use our learnings to make it smooth for you.


DOSSIER DevOps Culture


Interview series with DevOps influencers – Part 1 

Collaboration or survival of the fittest: Who runs the DevOps world?DevOps is all about collaboration but it’s not always easy to put theory into practice. After we’ve solved this dilemma, we need to get past the “What is DevOps?” question and answer “Where do we start?” instead. We invited nine DevOps influencers to clear things up for you. 

DevOps: Collaboration or survival of the fittest?

DevOps is all about collaboration — at least in theory. In reality, even though it produces great results, "collaboration comes at a price," according to Matthew Skelton, Head of Consulting at Conflux. What goes into a good team structure? Should we deliberately introduce some sort of boundary between teams to make sure one does not overpower the other? The answer is more nuanced than that. Case in point:

Drama aside, to all my Dev colleagues out there, please be considerate and stop killing your Ops partners.

— Arvind Soni, VP of Product at Netsil

DevOps doesn't have to turn into a fight for supremacy. If you follow these key conditions, your team should be able to play nicely, according to an Accenture blog post published a couple of years ago.

1. Goal definition is the first step en route to team collaboration.

2. One team approach is needed to build and inspire trust and mutual respect in your teams.

3. Diversity is key. Once you succeed in developing a closely knit team, the next step is to build sensitivity around diversity, as the teams have professionals from different regions and cultures.

4. A clear roadmap defines your path to achieving the objectives. Outline everyone's roles and responsibilities, and how each team member's work fits into the bigger picture.

In the first part of this interview series, we talked with nine DevOps influencers about the DevOps show and who's really leading it. Plus, since the focus is slowly shifting from "What is DevOps?" to "Where do we start?", we invited our DevOps heroes to clear things up for you.

9 answers: Who is leading the DevOps show? Devs or ops?

Charity Majors: The center of gravity is moving to software engineers, who sit in the middle of a mess of internal and external APIs and services, trying to craft something meaningful out of it. Software engineers are who we should be building for … not least because there's no such thing as "operators" anymore. Ops engineers write software too.

Ops isn't going away, but ops increasingly lives on the other side of an API, instead of sitting next to you at work. This is great news — you get to rent world-class operations talent by using companies like AWS, Fastly, and other infrastructure providers; talent that you likely could never recruit and hire yourself.

Mike D. Kail: The first tenet of DevOps is "Collaboration", meaning that it's about self-organizing teams and moving away from the concept of an individual or group "leading the show".

John Arundel: The point is that they're the same people, whether they realize it or not. Developers are intimately responsible for how their code performs in production; good developers relish that responsibility, because direct feedback (such as being on call) makes better software and better developers. Meanwhile, operators are the people who write the code which provisions the infrastructure, deploys the software, monitors the services, and so on; they're just as much


developers as the developers, but they work on a different codebase. Developers are operating, and operators are developing; that's DevOps.

Gregory S. Bledsoe: Here's a DevOps secret: mandates from one person or group to another don't work. When you hand down an edict, you automatically engender at best indifference and at worst passive-aggressive resistance (and sometimes macro-aggressive resistance!). In this case, the only entity that cares about whether the new process or tool works is the person or group that handed down the edict, and everyone else is checked out.

One of the two core ideas that started DevOps is collaboration. Collaboration means negotiation, compromise, and diplomacy, leading to all the stakeholders feeling like owners of the entire process and toolset, and the outcome achieved. In this case, everyone is motivated to solve resulting implementation problems. This is the genius of Deming's point: turn everyone into agents of transformation.

To bring this back to the answer to the question: If any one person or group is pushing DevOps onto others, it isn’t DevOps.

Jérôme Petazzoni: It takes two to tango, so I'd say both! The best developers are the ones who know operations (and how to write code with operations in mind). The best operators are the ones who know development (and how to automate their jobs). The point of DevOps is to make sure that both sides can (and actually do!) talk to each other. In some organizations, the pull to DevOps will come from developers (who are happy and eager to participate in deployment because it enables them to do a better job); in some organizations, the pull will come from operators (who are happy and eager to share the burden with developers to empower them). But you need buy-in from both sides.

Thorsten Heller: From our experience, we'd say developers have taken the driver's seat in DevOps. This might be a natural consequence of the fact that operators are often busy keeping things alive or firefighting, whereas a developer's mindset might be more open to new things.

Eric Vanderburg: It depends on the company and the culture. One element of the culture that can be an indicator of the leadership frame of reference is where senior management got their start. Those that primarily started out in the services space sometimes have more operations-focused leaders spearheading DevOps, while those that started with a software concept are more commonly development-focused. Frequently, DevOps leaders include the CIO, chief architect, director of operations, or director of software development.

Quentin Adam: I think the majority of the demand comes from the developers' side, after a long period of infrastructure stagnation and frustration that has led to this rush to push ops to give developers more space on management… which is sad, because DevOps has to be a common and shared approach to help the whole team become more efficient, without overpowering one another.

The result is often a "hello world"-driven architecture, mainly based on deployment agility, sacrificing stability, monitoring, uptime and security to the rush. The cultural gap with sustainability is clearly coming from the ops side and is not filled today. I hope the relationship will be more balanced in the future.

Hans Boef: From what I see, developers are taking the lead during their daily work. They need to set up a pipeline during the development process. In the various steps in this process, they arrange for the right persons to do their jobs.

The focus is slowly shifting from "What is DevOps?" to "Where do we start?" How do we answer the second question?

Charity Majors: You start by making software engineers responsible for their own services. Putting them on the hook for the quality of their own code shortens the feedback loop and aligns their incentives with their users. Operations is simply ownership and responsibility for outcomes.

No service really needs an operator. They need owners.

Mike D. Kail: The second tenet of DevOps is "Automation", so I would first start with the basic manual tasks that can be automated, measure the productivity gains, and continue along that path. I will also add: ensure that the tasks you automate are actually needed, meaning that they will serve to increase productivity and efficiency.

John Arundel: Put your developers on call for production.

Gregory S. Bledsoe: Every organization is a special onion. The reasons or pathologies that have resulted in the current process don't suddenly go away because you want to do DevOps. Customarily, organizations want to look at examples of what everyone else is doing and copy that.

This is exactly backwards, what Deming called "managing for result" instead of managing the cause. Results are side effects of effectively managing the cause, and this requires two things: a deep understanding of the operating principles of DevOps, and a deep analysis of the causes and misaligned incentives that prevent these solutions from emerging.

Also visit this Session:

Feedback!
Konstantin Diener (cosee GmbH)

Customers often ask me if there are guidelines or tools for an agile transformation or for introducing DevOps to an organization. The clear answer is 'No'. There is only one exception: feedback loops are the glue that keeps the evolution of your organization together. I will show you how your organization can be continuously developed by using feedback tools like retrospectives etc.



An organization often needs outside help to do this. It is tough to see your own forest for the trees, and both Deming and Drucker, whose principles form the foundations of DevOps, were big believers in introducing someone new into the scenario to diagnose what's in the soup you're swimming in. This comes down to the power of the invisible bounds of culture.

Jérôme Petazzoni: I once said, "one way to start your DevOps journey is to get started with containers." But I have also said, "using Docker (or containers) doesn't mean that you do DevOps." I stand by both statements, even if they sound contradictory at first. Using containers (to facilitate onboarding and achieve consistent development environments, for instance) is a good way to get started.

From there, you can move on to reproducible builds. Continuous integration is a great next step. From there, you can explore continuous deployment for QA and staging (for instance), and eventually for production. It's important to make sure that all teams are on board at each step, and to remember that tools like Docker and Kubernetes are just tools, and can be misused.

Thorsten Heller: The first thing to start with: mindset. Changing the mindset and understanding on both the developers' and the operators' side, to make all of them understand the benefits and "what's in it for me". Then organization. And in the end, maybe, tools.

Eric Vanderburg: DevOps can begin from the ground up or from the top down. Many successful DevOps stories started as a grassroots initiative. However, at some point, grassroots DevOps initiatives will need to have top management support. Likewise, top-down approaches will need the buy-in of those in development and operations for the change to be successful. Leadership provides the funding and direction, but cultural change requires a much larger percentage of the company to take place.

Quentin Adam: There are simple and more technical questions to answer. If you really want to change the way it works, management has to change its point of view. Lots of companies have different budgets and incentives between the ops team and the dev team. Basically, you have developers that need to ship new code to production as fast as possible to maintain the edge of their company, and on the other side, you have ops that are incentivized on making sure everything is stable and safe, and on reducing production costs. This leads to a conflicting culture inside the company.

The best way to make them work together (the real idea behind DevOps) is to reunite the teams with only one budget and organization, with aligned goals. It will be a good signal from top management to help teams implement DevOps for real.

From a tech perspective, I think the best way to "go DevOps" is to start with a clear inspection of the code base tooling: is the project buildable in one command? Makefiles everywhere? It's the first question to ask, because DevOps is mainly automation of the ops tasks. And the first thing is being able to easily build all the source code in the organization. Building solid bases is important, more so than setting up a complex distributed workload orchestration system.

Start by automating things you already do. If you create lots of MySQL databases, then automate database creation, monitoring and backups. You will only be good at it if you do it often; you need to automate things you really do, not the hype stuff.

From a cultural perspective, developers need to understand some networking and system-level stuff: learn about Linux, containers, systemd and many other things to be able to speak with the ops. The ops need to think about the future and learn how to code.

Hans Boef:  I think we need to share best practices, influence the right people, educate developers/operators etc.

Charity Majors is engineer/CEO at Honeycomb.

Mike D. Kail is CTO at Cybric.

John Arundel is the author of several technical books and has worked with hundreds of clients as a consultant.

Gregory S. Bledsoe is a consultant with Accenture, writer, speaker and thought leader.

Jérôme Petazzoni is an international speaker. He previously worked at Docker, Inc.

Thorsten Heller is CEO and Co-Founder at Greenbird.

Eric Vanderburg leads the cybersecurity consult-ing division at TCDI.

Quentin Adam is CEO at Clever Cloud.

Hans Boef is a Developer Advocate at IBM.

DevOps Influencers


It‘s time for a shout out to DevOps influencers

Top 20 social influencers in DevOps 2018Who are the most influential DevOps people in the Twittersphere? After analyzing thousands of accounts, we created a list of people that every DevOps enthusiast or pro should be following.

All influential people have something in common: they can spread ideas faster and better than anyone else. We are aware that following these people has a handful of perks, including staying on top of the latest news and trends. Therefore, we decided to concoct a list of Twitter accounts all DevOps fans should follow.

The analysis ranks the top accounts in accordance with their social influence, although interestingly enough, not all household names of DevOps evangelists are on the list. Moreover, the list does not in any way rank a person's character, skill set or talent; it reflects her/his impact on Twitter from an algorithmic perspective.

If you think we have missed you or any DevOps rockstar you know, drop us a line here. Nevertheless, we are very proud of the list that we came up with.

Congratulations to all influencers who made it into our top 20 list!

Methodology
We first generated a list of twenty thousand DevOps-related Twitter accounts (including all accounts that contain the keyword DevOps in their bio or in any of their tweets). To score the accounts and rank them accordingly, we analyzed their social authority and reach using two key metrics: MozRank and Klout.

Moz Social Authority Score: The Social Authority score is composed of:

1. The retweet rate of users' last few hundred tweets.
2. The recency of those tweets.
3. A retweet-based model trained on user profile data.

Visit this MOZ blog post for more in-depth information.

Klout Score: Klout uses more than 400 signals from eight different networks to update the Klout Score daily. It's mainly based on the ratio of reactions a user generates compared to the amount of content he shares. Read more at the Klout score blog.

For more details about this year's top DevOps influencers, check out the JAX DevOps blog.

Also visit this Keynote:

Observability for emerging infra: what got you here won't get you there
Charity Majors (honeycomb)

Distributed systems, microservices, containers and schedulers, polyglot persistence… modern infrastructure patterns are fluid and dynamic, chaotic and transient. So why are we still using LAMP-stack era tools to debug and monitor them? Let's talk about the industry-wide shifts underway from metrics to events, from monitoring to observability, and from caring about the system as a whole to the health of each and every request, and every user's experience.


TOP 20 JAX DEVOPS INFLUENCERS IN DEVOPS 2018 (devops.jaxlondon.com)

1. Martin Fowler (@martinfowler) – Programmer, Loud Mouth, ThoughtWorker
2. Laurent Miltgen (@kubernan) – #Geek in the #cloud
3. John Arundel (@bitfield) – Cloud-native devops consultant, writing software for 35 years
4. Eric Vanderburg (@evanderburg) – Tech Leader, Author, Consultant, and Speaker
5. Gregory S Bledsoe (@geek_king) – Disruptor-In-Chief. Gentleman Barbarian, Peaceful Nerd Warrior.
6. Simon Wardley (@swardley) – Corporate cartographer, chaotic evil, destroyer of undeserved value.
7. Charity Majors (@mipsytipsy) – Cofounder at @honeycombio. Likes whiskey, rainbows, and systems engineering
8. Tech Junkie (@techjunkiejh) – Experienced and passionate software engineer
9. Amitav Bhattacharjee (@bamitav) – Strategic leader with over 2 decades of experience in the IT industry
10. Bridget Kromhout (@bridgetkromhout) – Technologist, podcaster, conference speaker, team #opslife
11. JBD (@rakyll) – Programmer. Distributed systems observability at Google.
12. Jez Humble (@jezhumble) – Author, Speaker, CTO at DevOps Research
13. Mike D. Kail (@mdkail) – CTO @Cybric. 25+ years of Technology Executive Leadership Experience
14. Gene Kim (@RealGeneKim) – DevOps enthusiast. Coauthor: DevOps Handbook
15. Solomon Hykes (@solomonstre) – Hacker & entrepreneur. Founder of Docker
16. Quentin Adam (@waxzce) – CEO @clever_cloud. IT automation and application sustainability
17. Dan Wahlin (@DanWahlin) – Developer, architect, technology trainer, author and public speaker
18. Thorsten Heller (@ThoHeller) – #CEO, Co-#Founder & Chief-#Geek at @greenbirdIT
19. Liz Fong-Jones (@lizthegrey) – Staff SRE, @googlecloud Customer Reliability
20. Jérôme Petazzoni (@jpetazzo) – Developer and system administrator #containers


DOSSIER Security


Laying the DevSecOps foundation

Automating DevOps – The technology missing linkOne of the hottest topics nowadays, DevSecOps also happens to be brand new so in order to put it into practice, we need to understand it better. First off, we need to take one crucial first step: incorporate automated security scanning. Here, Dr. Rao Papolu looks at the basic stages of this process.

By Dr. Rao Papolu

The typical narrative around DevSecOps focuses on people and process, often highlighting the need to change how individuals interact with code development. A shift-left in the introduction of security, to where the issues are most readily identified, is a smart move. But it's only part of the answer. The technology must be in place to permit the required automation; otherwise, DevSecOps is just a word. This is where the ability to automatically assess code against a security baseline comes into play as a valuable first step towards true DevSecOps. By 2019, more than 70% of enterprise DevSecOps initiatives will have incorporated automated security vulnerability and configuration scanning for open source components and commercial packages, up from less than 10% in 2016, according to Gartner.

The Gartner report details the top strategies for overcoming hurdles to DevOps in regulated situations. Beyond closer collaboration between DevOps and InfoSec, we find automated testing, automated deployment, automated workflows, and automation of manual steps. The move to automate security scanning is a natural first step that will establish a solid foundation for every new code development.

Organizations need to be able to quickly test their security policies and ensure that they're meeting regulatory requirements and adopting the right security posture. But this shouldn't be a one-off test; with something like PCI DSS, 100% compliance at the start isn't enough, it must be maintained. Verizon's 2017 Payment Security Report found that PCI DSS compliance jumped from 11% in 2012 to 55% in 2016, but almost half of companies fell out of compliance within nine months. To combat this risk, automated security scanning must be baked into the process permanently. Luckily, that's perfectly achievable.

Also visit this Workshop:

WebGoat Workshop: Teaching Application Security 101
Nanne Baars (OWASP)

A good defense against insecure code requires understanding the mechanics behind how attackers exploit simple programming mistakes. The WebGoat team will walk through exercises like SQL Injection, XSS, XXE, CSRF, … and demonstrate how these exploits work. We will show you how you can use WebGoat to train your developers to avoid these simple but common programming mistakes. We also show you how to extend WebGoat to create lessons specific to your environment. Join us to learn about the most basic, but common, application security problems. Tired of all the lessons? During the training we will host a small CTF competition which you can take a shot at and compete with each other.

Page 31: WHITEPAPER 2018 - DevOps Conference · 2018-10-31 · WHITEPAPER 2018 40+ pages of knowledge for DevOps Enthusiasts. All about Docker, Kubernetes, Continuous Delivery, DevOps Culture,

DOSSIER Security

www.devopsconference.de 31

What does automatic security scanning look like?
The workflow described below can serve as a template for teams who wish to bring automation into any phase of the CI/CD process. The overall environment consists of a CI/CD system (in this case, Jenkins); an API-driven assessment platform with documentation of the different GET, POST, and PUT commands; a shell script that communicates with this platform, thus automating security; and, for this example, the Docker Hub.

The workflow is as follows:

1. The developer initiates the build via the CI/CD platform.
2. As part of the build, there is a trigger to call the script. This is no more difficult than adding an additional build step.
3. Now moving over to the actual script, the first step is to pull the required Docker image from the repository, in this case, the Docker Hub (docker.io). The script should be versatile enough to pull an image from any public or private repository. Via an API call, the script pushes the Docker image to the assessment platform.
4. The next step is to select the policy framework. Here, we select Docker Image Scanning since we wish to assess the Docker image, but the script is versatile enough to handle other types of frameworks. For example, the goal might be to automatically assess the PCI security posture of a new virtual instance. In this case, the assessment platform would compare a PCI framework against the newly created server.
5. Finally, the script triggers the actual assessment. Figure 3 shows a snippet of the API call for this action.
6. The platform assesses the image against the policy framework and then generates a risk score, in this case, '80'.
7. The script includes logic to compare this score against the threshold that the developer sets.
8. If the score is >= '75', the image is considered secure and is automatically promoted to the next step. Conversely, if the image score is below '75', the image is considered insecure, and the developer is notified.

Together, the overall logic flow, the API-driven assessment platform, and the scripting form the core of the security automation. These can be generalized to support additional security tests and invocation at multiple stages within the CI/CD process.
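A minimal sketch of such a script is shown below. The platform URL, endpoint, and response field are hypothetical placeholders, not the API of a specific product; the overall shape simply follows the workflow above (requires curl and jq).

#!/bin/sh
# Hypothetical image, platform endpoint and threshold -- adjust to your setup.
IMAGE="myorg/myapp:latest"
PLATFORM="https://assessment.example.com/api"
THRESHOLD=75

# Step 3: pull the image and hand it to the assessment platform
docker pull "$IMAGE"
curl -s -X POST "$PLATFORM/assessments" \
  -H "Content-Type: application/json" \
  -d "{\"image\":\"$IMAGE\",\"framework\":\"docker-image-scanning\"}" \
  > result.json

# Steps 6/7: extract the risk score (field name is hypothetical)
SCORE=$(jq -r '.riskScore' result.json)

# Step 8: promote or fail the build depending on the threshold
if [ "$SCORE" -ge "$THRESHOLD" ]; then
  echo "Image secure (score $SCORE), promoting to next stage"
else
  echo "Image insecure (score $SCORE), notifying developer" >&2
  exit 1
fi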

Automating DevOps, and therefore enabling DevSecOps, requires a combination of people, processes, and technology. With automated code assessment to flag security issues, the previously missing component – technology – is no longer a gap in success.

Figure 2: People and process

Figure 3: API Call: Docker image scan and Jenkins build and script technology

Dr. Rao Papolu is President and Chief Executive Officer of Cavirin Systems, Inc., a provider of continuous security assessment and remediation for hybrid clouds, containers and data centers. Rao is on the Board of Directors of SRA, Inc., a publicly-traded company, and an Advisory Board Member to Solix Technologies. He received his doctorate degree from the Indian Institute of Technology (IIT), Madras. He has published 25 technical papers in various international journals and was a visiting scientist at the University of Michigan (Ann Arbor) and the Institute of Space and Astronautical Science, Japan.

DOSSIER Microservices

No more monolith

Microservices: From a monorepo to a microplatform

The sum can be much greater than the parts in microservices. In this article, Stuart Harris explains what monorepos and microplatforms are and why they are essential parts of the future of microservices.

By Stuart Harris

Microservices are very popular right now, and for good reason: they enable evolutionary architecture. Each service's bounded context allows it to evolve on its own roadmap. This gives us great domain-based separation of concerns, so we can move very quickly while being more scalable and highly available. When done properly.

Doing this properly: therein lies the challenge! Wiring together a bunch of microservices, while maintaining version compatibility, in a way that tolerates failure and scales the right services at the right time, can be very tricky. You could argue that it was easier when we were building monoliths. At least the whole thing was deployed in one go, and it either worked or it didn't! But crucially, it was tested as one piece.

What we really want are the most beneficial features of both microservices and monoliths. At the same time. Hold this thought.

What’s a monorepo?Typically, each microservice lives in its own separate source code repository. After all, it’s owned by a sin-gle team, so that would make sense. Unfortunately, it

doesn’t exist in isolation of the other microservices in the application. They have to talk to each other, and as each one is evolving independently, we have to be very careful to keep backwards compatibility between them.

Sometimes, however, we want to make breaking changes, and we normally deal with this by using Semver to declare whether we've made a non-breaking change or a breaking one. The problem with this is that it relies on a human to determine whether (potentially unknown) consumers will break because of a specific change. Sometimes this is obvious and sometimes it isn't. Bumping the versions all over the system is also a lot of error-prone, tedious work.

Rich Hickey, one of the greatest thinkers in our industry, in probably his best talk to date, Spec-ulation, rants against Semver. Instead, he advocates that we should only ever make non-breaking changes (i.e. provide more and require less). Otherwise it's a new thing and should be called (and deployed as) something else.

However we manage it, when we make breaking changes we usually end up running several instances (with different names, or at different versions) of our microservice until older consumers have had a chance to catch up. It can, and invariably does, get messy quite quickly.


In a monorepo, the source code for all the microservices lives together in a single repository. This helps, because developers can now make atomic commits across the whole application. So you could even change both the producer side and the consumer side of an API call, for instance, at the same time. And it could even be a breaking change, if necessary (providing you blue/green deploy both services at the same time).

This means that Semver is no longer needed for our microservices. There is only ever one version of everything, and that is the commit "SHA". The commit history becomes the history, and you know that everything at a specified commit is designed (and tested) to work together.

This sounds like we're starting to get some of the benefits of a monolith!

And microplatforms?

Earlier this year, British Airways had a global IT failure that grounded all their planes, affecting 75,000 passengers in 170 airports around the world. Apart from feeling the pain of everyone involved, I was also fascinated by this incident. How can a single failure have such a huge "blast radius"? Everything failed, from the website to the check-in systems to the passenger manifests to the baggage handling systems.

Details emerged that a power supply failure triggered the event (don't ask about the UPS or backup systems), so you would imagine that it was initially fairly localised; then the ripples turned into a tsunami, and before you knew it the whole company was completely paralyzed.

Small changes can have massive consequences. A domino can knock over another domino about 1.5x larger than itself. A chain of dominoes of increasing size makes a kind of mechanical chain reaction that starts with a tiny push and knocks down an impressively large domino.

It should never be possible for any failure to affect more than its own "system". Additionally, failures should be expected, which is a core tenet when building high-reliability systems (e.g. with Erlang). We should focus on reducing the mean-time-to-repair.

As we move from monoliths to microservices, it's easy to understand how the blast radius can be reduced dramatically. And how much more quickly we can recover when the jurisdiction is small. But what happens when the platform itself fails? The platform is a globally shared dependency that introduces hidden vulnerability.

So, I think there’s more we can do. What if an applica-tion, and all its microservices, could be deployed to its own small platform? Instead of a large shared platform, where a failure can be catastrophic, how about many smaller platforms? Distributed across multiple data centers and even multiple cloud providers. Let’s start thinking about “microplatforms”.

A microplatform is a fully featured platform that can be easily provisioned and completely managed by a small (two-pizza) cross-functional delivery team. By fully featured, we mean that it is self-healing, highly available, fault tolerant and auto-scaling, with load balancing, internal networking, service discovery and secrets management. It is small and runs on a laptop, in any cloud provider, or on-premise. It's cheap and disposable, as in quick and easy to create and destroy. It's provisioned automatically, using immutable infrastructure-as-code principles.

So, we can have any number of ephemeral, production-identical platforms that we can use for testing (including performance and penetration testing), safe in the knowledge that our results are representative of a production environment. Because they are cheap to spin up anywhere, it's easy to make them resilient through redundancy. Since they are cloud agnostic, they can span multiple cloud providers, reducing exposure and building in tolerance of failure. Because they are immutable and can only be changed by changing the source code, no humans are allowed into the servers. And servers without users are inherently much more secure.

I can't overstate the importance of evolving a codebase to stamp out identical copies of these microplatforms. Every time a human makes a manual change to running infrastructure, the knowledge encapsulated in that change is lost. It's in their head (at the time) and that's it. It's not captured and it can't contribute to the evolution of the design. When you fully embrace the principles of immutable infrastructure as code and fully automate everything, then each change is recorded in the codebase and can be verified and built upon by others to continuously evolve a good design. Just like with application code. Or with pipelines as code. Ultimately, all the things should be "as code".

Also visit this Session:

Prometheus for Devs
Hubert Ströbitzer (smarter ecommerce)

Having microservices without proper monitoring tools is like driving the freeway without lights at night. One of those proper tools is Prometheus. In this session I will discuss and show:

• the general architecture of Prometheus

• how to run Prometheus and Grafana within Docker

• how to run some exporters and how to scrape them

• how to create a simple Dashboard in Grafana

• how to write custom metrics with a Spring Boot application


Building distributed systems that assemble many microservices is hard. Developers need to orchestrate a complete application by coordinating multiple smaller services. Techniques such as service discovery help massively. Microplatforms help even more. Developers can design, build and test their complete systems on a laptop, which hosts a production-like platform. What gets shipped is code describing a configuration that has been thoroughly tested on an identical platform. Because all the platforms are identical.

The biggest contribution that microplatforms make is to increase the autonomy of the team. Teams design, build, run, fix and evolve their product. To do this, they need engineers to be T-shaped and DevOps capable. Microplatforms bring the hosting of their product into the team. Because they are simple to set up and manage, the team doesn't need to rely on another team to host their application. This means they can move faster. Even though there is a small overhead of running the microplatform, it's significantly less than the overhead of taking a dependency on a horizontal, shared platform and the team that manages it. This sharing creates a huge backlog of features, only a fraction of which a single team actually cares about. The team now has significantly more available time to concentrate on delivering features that add real business value.

So, what does a microplatform look like?

What does "simple to set up and manage" mean?

Well, the platform manages container instances. It schedules them across the underlying machines, scales them automatically as load increases, and restarts them when they fail. “Cattle” instances (that don’t have state) are easier to manage than pet instances (that do). So, a microplatform only has cattle. No pets allowed. Or if pets are needed then they are converted to cattle by having them access state somewhere else (like a separate data store that is managed and scaled independently; i.e. datastore-as-a-service).

It must be simple. Not just easy (hard becomes easy through familiarity, but complex rarely becomes simple – see this other great talk by Rich Hickey).

The simplest platform tech out there at the moment is Docker in Swarm Mode, with Google's Kubernetes (which carries more cognitive load) following closely behind. Docker has done a great job of distilling all the important features of a platform into something that is simple to use and easy by default.

In fact, Docker in Swarm Mode is so simple that all you need are VMs running the Docker Engine. Nothing else. You get all the features listed above, out of the box. For free. You can create a cluster with "docker swarm init", and join another VM into the cluster with "docker swarm join". It really couldn't be any easier. And it's this ease that makes it a prime candidate for microplatforms; in my opinion, it's the first time we've been able to say that a platform is easy enough to be managed within a cross-functional team.
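For illustration, bootstrapping a whole cluster really is this short (standard Docker CLI; the manager IP is yours, and the join token is printed by the init command):

docker swarm init --advertise-addr <MANAGER-IP>      # on the first VM; prints a join token
docker swarm join --token <TOKEN> <MANAGER-IP>:2377  # on each additional VM
docker node ls                                       # on a manager: lists the cluster nodes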

Great! So we know a bit about all three "M"s in the title of this post. What we haven't done yet is talk about why bringing these concepts together makes so much sense.

The sum is greater than the parts

In my opinion, one of the things that jumps right to the top is Docker Stack. It's part of the Docker client, and it allows you to deploy an orchestration of services to a Swarm in one go. It uses Docker Compose files, and it effectively does for Docker in Swarm Mode what Docker Compose does for Docker. Version 3 of the Docker Compose file specification allows you to include extra information about how a service should be deployed to a cluster, so that it can be used by Docker Stack: stuff like how many replicas there should be, where they should be placed, how to do updates, and how to deal with failure (see the Compose fragment below). This sounds great, but it's even better, because when you deploy a whole stack (orchestration), only the services that have changed will be updated (using zero-downtime, rolling deployments). The others will be left alone.
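As a sketch of what that extra deployment information looks like, here is a fragment of a version 3 Compose file; the service name, image and concrete values are made up for illustration:

version: "3.3"
services:
  api:
    image: registry.example.com/myapp/api:${GIT_SHA}  # tagged with the commit SHA (see below)
    deploy:
      replicas: 3                           # how many replicas there should be
      placement:
        constraints: [node.role == worker]  # where they should be placed
      update_config:
        parallelism: 1                      # how to do rolling updates
        delay: 10s
      restart_policy:
        condition: on-failure               # how to deal with failure

Deploying (or re-deploying) the whole stack is then a single command: "docker stack deploy -c docker-compose.yml myapp".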

Now, if we're using a monorepo, then we know that versioning between services has gone away and has been replaced by a single version identifier (the Git SHA) across all the services.

Also visit this Session:

Microservice Authentication and Authorization
Nic Jackson (HashiCorp)

In this talk we will look at how you can secure your microservices. We will identify the difference between authentication and authorization and why both are required. We will investigate some common patterns for request validation, including HMAC and JWT to avoid the confused deputy problem, and also how you can manage and secure secret information. Finally, we will see how we can leverage tools like the open source HashiCorp Vault, as well as features from cloud providers like AWS and GCP, to keep your systems and users secure. Takeaways:

• Using JWT for Authz

• How to implement two factor authentication into your applications

• Securing microservice secrets

• Implementing TLS and MTLS

• Securing database access, don’t be the next Equifax

• Encryption in transit, secure your data

• Building a secure secret access policy


So when we build our Docker images, we'll push them to a Docker registry tagged with this SHA. Note that if a particular service has not changed, then the layers used to make the image will all be in a local cache, and the build will be skipped. The push will also be skipped for all image layers that have not changed and are therefore already in the registry. Finally, the deploy will be skipped for any services that have not been updated.

What this leaves us with is an idempotent build phase, an idempotent push phase and an idempotent deploy phase – for the whole stack. Which means we can build, push and deploy the whole application at once (as though it were a monolith), and only the bits (image layers) that have changed will be built, pushed and deployed. It's like React, but for microservice deployments.

And we can redeploy any previous SHA from the registry, as a whole orchestration, and only the services that need reverting will be reverted (a sketch of the whole cycle follows).
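Put together, the whole cycle might look something like this; the stack name and the Compose file are the hypothetical ones from the sketch above:

export GIT_SHA=$(git rev-parse --short HEAD)      # the single version identifier

docker-compose build                              # unchanged services hit the local layer cache
docker-compose push                               # layers already in the registry are skipped
docker stack deploy -c docker-compose.yml myapp   # only changed services are updated

# Rolling back to a known-good state is the same deploy with an earlier SHA:
# export GIT_SHA=<known-good-sha> && docker stack deploy -c docker-compose.yml myapp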

Overall this is fast, efficient, and very convenient. It means we can move forward really quickly, using Continuous Deployment, and if we absolutely need to, we can go back to a last known good state in super quick time, reducing our mean-time-to-repair (although, because fixing forward is now fast, it's usually a better option).

Additionally, when we make the services "12-factor" and also use the cluster's built-in DNS-based service discovery, it makes the application even more like a monolith, as each service only knows itself in the context of the application, i.e. there is only one configuration, which is used by all environments. This is especially true if you link to remote APIs via ambassadors (network-level proxies) that present a well-known (named) swarm-local proxy for remote endpoints (see the sketch below).
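A minimal sketch of such an ambassador, assuming a hypothetical remote payments API: a tiny proxy service gives the remote endpoint a fixed, swarm-local DNS name, so every consumer simply calls http://payments-api in every environment.

version: "3.3"
services:
  payments-api:              # well-known, swarm-local name for the remote endpoint
    image: alpine/socat      # any small TCP/HTTP proxy will do
    command: tcp-listen:80,fork,reuseaddr tcp-connect:payments.example.com:80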

By deploying whole orchestrations of microservices as though they were one monolithic application to production-identical platforms that we control, we have suddenly empowered our team to continuously deploy value to our customers and reach the kaizen of continuous improvement. At a pace we won't have seen before. All by just gradually evolving and improving code, just like the team already does for their application.

Now experiment

If you want to play with these ideas, check out this GitHub repository. It contains automation and tooling for building and managing Docker in Swarm Mode clusters on Mac OS X, in Google Cloud and in AWS. If you feel like it, please get involved, raise issues and pull requests, or just use it to get going.

Red Badger is also offering a free meet-up about microplatforms. If you're in London on October 24th, head on over to the Red Badger HQ for a great chance to catch up and learn more.

Stuart Harris is the co-founder and Chief Scientist at Red Badger. You can find him on Twitter @StuartHarris.


Interview with Kai Tödter

Microservices are more than just a hype

Microservices may be a trendy architecture style right now, but they're a flexible yet solid foundation for software. In this interview, JAX London speaker Kai Tödter describes some of the advantages and disadvantages of microservices. Plus, he explains why cloud computing is a natural support for a microservices architecture.

JAXenter: A lot of people jump on the microservices bandwagon without having a clear purpose in mind. How important is it to ask yourself the question: “Should I use microservices?”

Kai Tödter: Often people use software architecture styles just because they are trendy or hyped. So it is very important to ask this question. There are many reasons for using microservice-based architectures but there are always costs that should never be ignored.

JAXenter: How can microservices — if used correctly — offer flexibility in deciding how to best utilize a project's resources?

Kai Tödter: One of the benefits of a microservice-based architecture is that a single microservice can be totally owned by a small team. So the team members can decide what would be the best technology stack for implementing the microservice. The teams are more flexible and can benefit from existing knowledge and available skills.

Kai Tödter is Principal Key Expert for Software Architecture and Technologies at Siemens Building Technologies. He has 20 years of Java experience and represented Siemens in the Java Community Process (JCP) and in the Eclipse Foundation. Kai is a committer on several open source projects; his current focus is on web technologies, microservices and hypermedia APIs. Follow him on Twitter @kaitoedter.

Also visit this Session:

Working up the Hierarchy of Service Reliability
Björn Rabenstein (SoundCloud Ltd.)

When former Google SRE Mikey Dickerson had to explain to outsiders the Google way of increasing a site's reliability, he came up with his famous "Hierarchy of Service Reliability", modeled on Maslow's even more famous "Hierarchy of Needs". In Dickerson's hierarchy, the base is monitoring, and the actual product is at the very top, with a number of layers in between. It is one of the most meaningful illustrations in O'Reilly's book about "Site Reliability Engineering: How Google Runs Production Systems". SoundCloud's journey to a reliable site essentially meant climbing up that hierarchy. However, we had to learn – often the hard way – that we could not just copy Google's SRE practices directly. We found ourselves applying "SRE in spirit" but adjusting the implementation details to the scale and culture of our organization. Since SoundCloud also follows a very radical school of DevOps, our story contains a good amount of productive cross-pollination between SRE and DevOps.


JAXenter: What is the correct way to use microservices?

Kai Tödter: There is no general "correct way"; it always depends on the functional and non-functional requirements of the business context. But the teams have to think carefully about how microservices should interact with each other, like using orchestration or choreography. The services themselves should be self-contained or use resilience patterns when they need data from other microservices.

JAXenter: Can microservices increase complexity?

Kai Tödter: Yes. While one single microservice might reduce complexity for a specific domain or functionality, the composition of many microservices increases complexity. A few examples are scalable deployment, communication between microservices, centralized logging, monitoring and tracing. And this is just the tip of an iceberg that is often underestimated.

JAXenter: What is the biggest misconception about microservices?

Kai Tödter: I guess one of the biggest misconceptions is that microservices might solve all the existing (architectural) problems. That is definitely not the case, and teams should think carefully not only about the benefits but also about the costs when they want to go with a microservice-based approach.

JAXenter: In your view, what are the 3 golden rules of microservices deployment?

Kai Tödter: I wouldn't nail it down to "3 golden rules", but there are a few characteristics that should apply to all microservices. For example, each microservice should be independently deployable, upgradable, replaceable and scalable.

JAXenter: What are the best open-source tools for orchestrating microservices?

Kai Tödter: I guess this question is related to container orchestration rather than microservice interaction patterns like orchestration or choreography. Popular container orchestration tools are Kubernetes, Marathon (for Mesos and DC/OS) or Docker Swarm.

JAXenter: What are the key elements in implementing a microservice-based architecture?

Kai Tödter: There are many elements that characterize a microservice-based architecture. I think one key element is that microservices are treated like products rather than projects, meaning that the team owns the whole lifecycle of a microservice, including deployment and operation. A team that owns a microservice should be cross-functional and organized around business capabilities rather than specific technologies (like UI or databases); all skills needed to implement the whole microservice should be in the team. Data management should be decentralized (e.g. each microservice decides its own persistence layer) and technology stacks should be chosen by the team, rather than having a centralized governance.

JAXenter: SAP President Steve Singh said on an episode of CNBC's "Mad Money" that cloud computing is yesterday's news – 'microservices' are the future. Do you agree?

Kai Tödter: I think Steve used the term "microservice" in a different context. If you think about "cloud computing" as using remote servers hosted on the Internet rather than a local server or a personal computer, then I would agree, because microservices are much finer grained on that level.

If you think about cloud computing as using cloud infrastructures, platforms and services, then I would consider the cloud a great deployment infrastructure for microservices. So, from my point of view, deploying microservices on cloud infrastructures fits very well.

Thank you!

Also visit this Session:

Continuous Integration/Continuous Delivery for Microservices: Rule them all
Nir Koren (LivePerson)

Microservices development environments are becoming more and more popular in cloud-based companies in order to support better CI/CD methodologies. I would like to show a case study which leads to best practices of how we manage CI/CD for 200 microservices, based both on Docker/Kubernetes and Puppet under production environments, and how we are able to control them all using a variety of tools, internal developments and technologies.

DOSSIER Cloud Platforms

Will you be part of the multi-cloud (r)evolution?

How to capture the multi-cloud opportunity

Digital transformation projects and agile development are pushing companies towards running multiple cloud applications on different infrastructures. In this article, Dan Lahl, Vice President of Product Marketing at SAP, talks about the roadblocks to multi-cloud success and how to capture the multi-cloud opportunity.

By Dan Lahl

Multi-cloud environments are quickly becoming a necessity for effectively managing applications and workloads in today's distributed enterprise, driven by best-of-breed SaaS apps as well as innovation projects leveraging the cloud's inherent agility for application delivery.

This next evolution of cloud computing means leveraging multiple cloud technologies, from multiple infrastructure and application vendors, potentially including public and private clouds. It's a strategy in which companies can store and manage their software in the cloud environments that best fit with their chosen environment and software, such as AWS, OpenStack, Microsoft Azure, Google Cloud Platform, or others, helping companies realize both cost savings and efficiencies. This new phase of cloud computing is an improved strategy for companies who need options and flexibility when it comes to bringing together applications in such a complex landscape – not just an outsourced datacenter.

The evolution of cloud computing

To really understand how multi-cloud got here, it makes sense to recognize where it came from. The evolution of cloud computing began as far back as the 1960s and 70s, with mainframe computing providing remote access to multiple users with shared access to a single resource (via the MVS operating system), or private access to computing resources (the VM/CMS operating system), which was radically sophisticated at the time.

Also visit this Session:

Become a Cloud-Native: Java Development in the Age of the Whale
Dr. Roland Huß (Red Hat)

This presentation shows you how the transition from traditional Java application development to a cloud-native model, with Kubernetes as orchestration platform, can take place without pain. We will learn how and with which tools we can easily install a local cloud-native development environment. In a first step, we are going to look at how to migrate Java applications to Kubernetes effortlessly and without changes. However, to profit the most from the container abstraction, application architectures need to adapt, too. Microservices are a perfect fit for this new world. We will see how we can take full benefit of advanced platform features. This presentation focuses on live demos with hands-on coding. We start our journey from scratch with a small plain old Java application which we will port to Kubernetes within minutes. Step by step we increase the complexity, so that at the end we will have a feeling for how we can bring Java projects to Kubernetes without knowing all the bells and whistles.


Gradually, the industry embraced the concept of virtual machines, as well as virtualized private network connections, in the 90s. From grid computing to SaaS to public cloud and hybrid cloud computing, technology has come a long way from the days of mainframes, although interestingly many concepts today are very similar to what was envisioned in the 60s and 70s. One could call this the "nothing new under the sun" syndrome, but the concepts are being implemented so much better now as technology has matured. And the benefit is that there is not just one vendor (IBM) playing in this ecosystem to service customers; rather, there are literally thousands.

What's driving multi-cloud?

As organizations face digital transformation, they're faced with an influx of process- AND data-intensive applications and services. This movement complicates their infrastructure and makes it difficult to streamline. While hybrid cloud (offloading a process or subset of data from on-premises to the cloud) is one solution – that market is expected to surpass $91.74 billion in net worth by 2021 – organizations are looking for even more flexibility when it comes to their cloud makeup. If a company wants to build an app on Azure and move that app to run seamlessly on AWS, or if they want to integrate that app with data or processes running on Google Cloud Platform, they should now be able to accomplish this with multi-cloud. The key is abstracting above the hardware layer.

At the same time, digital transformation projects and agile development are pushing companies towards running multiple cloud applications on different infrastructures, due to different workloads that necessitate unique requirements. These complexities have IT leaders looking for a solution that offers choice and flexibility, so they can avoid vendor hardware infrastructure lock-in and customize it to their specific needs, while avoiding the pain of migrating legacy apps to a new platform. Again, abstraction of the app from the hardware is key.

The legacy app roadblock

According to a recent report from MIT Technology Review, 62 percent of IT leaders say integrating legacy systems is the biggest roadblock to multi-cloud success. Besides the cost factor, deploying new technology with cloud services also means time and energy spent training employees on how to use it, as well as on new deployment models (CI/CD), which takes away from initial productivity. Once they do overcome these initial hurdles, they will reap a variety of benefits, such as enhanced data privacy, improved efficiency and agility, and even stricter data security.

A big part of successfully using the cloud is understanding which new technologies to use, and which to hold off on, or not use at all. Before jumping on the latest and greatest cloud technologies for technology's sake (the lemming dilemma), organizations should assess their infrastructure and deploy a balance between on-premises legacy apps and the latest cutting-edge cloud technologies that best meet their needs. And for goodness sake, don't get locked into an Infrastructure as a Service (IaaS) you can't abstract above; that defeats multi-cloud portability – and it means you are forever stuck in one vendor's infrastructure (an outsourced datacenter).

To make the move to multi-cloud easier, there are platforms that bring together the different applications in one user interface, rather than going through different infrastructure systems. For example, pure Platform as a Service (PaaS) provides customers and partners with capabilities that allow for building and extending personalized, collaborative, mobile-enabled cloud applications above the infrastructure – giving companies the flexibility they want.

These services accelerate digital transformation across businesses by enabling organizations to build the exact applications needed more quickly, easily and economically – without the requirement of maintaining or investing in on-premises or one hyperscale cloud infrastructure. With these types of services, such as PaaS, enterprises have the freedom to choose their underlying cloud infrastructure provider, as well as the flexibility to co-locate new cloud applications alongside existing investments, while meeting regulatory and compliance requirements.

2018 and beyond

In the coming year, IT leadership must put agile, open and flexible environments in place to enable the rapid app development for the digital transformation sweeping through the industry. More and more businesses will adopt a multi-cloud strategy rather than locking themselves into one hyperscale cloud vendor and their finite set of platform services (see one vendor vs. thousands of vendors in the ecosystem above).

This multi-cloud agility for rapid app delivery will lead to new innovations and business model transformations in 2018, as organizations will have more freedom to create and reinvent than ever before. Now the question is: will you be part of this multi-cloud (r)evolution?

Dan Lahl, Vice President of Product Marketing at SAP, has been in high tech for over 30 years, with extensive experience in data management, data warehousing and analytics. While at SAP, Dan has led emerging technology initiatives, including Data Integration, Data Grid, In-Memory Database and Mobile BI. Dan is currently focused on growing the SAP Cloud Platform business for SAP. Dan has degrees from the Haas School of Business at U.C. Berkeley and Trinity Evangelical Divinity School in Chicago. In his off hours, Dan enjoys paddleboarding, skiing and rooting for Bay Area sports teams.

DOSSIER Serverless

Interview series with JAX DevOps speakers

The road to serverless maturity: Running away from "NoOps" or toward it?

Serverless has grown considerably in the past few years, but is it ready to embrace its maturity? And does that mean running away from NoOps or toward it? In the last part of our interview series, we invited six JAX DevOps speakers to weigh in on the serverless movement, its "competition" with container-based cloud infrastructure, and the challenges Kubernetes and Docker should be addressing this year.

Serverless = […]Ops

We've already had this conversation, but that doesn't mean we cleared the air. Does serverless mean NoOps? It depends who you ask. Bart Blommaerts, architect at Ordina, is opposed to the idea of giving serverless this nickname, but Michiel Rook of FourScouts believes that "serverless is one more step towards 'NoOps'" — although it should be noted that he dislikes the term.

What do you think? If you want to hear why Bart thinks serverless does not mean NoOps, check out this video interview. If you want to dive deeper into this discussion, keep reading.

Speaking of serverless, it might be young but it has quickly gained a seat at the big boys' table. Last year, JAXenter included serverless computing in the annual survey.

Also visit this Session:

Live Coding: A Serverless Platform in 100 Lines
Cyle Riggs (Container Solutions)

I will build and demonstrate a working serverless platform, without using any existing serverless framework, live on stage. In implementing FaaS I will demonstrate the core components, features, and challenges of architecting and using various serverless platforms.


The results were quite astonishing — this rather young topic grabbed a lot of respondents' attention.

The good news is that serverless is on our radar again, so if you want to have a say in this year's trends (and give serverless your 'seal of approval'), don't forget to fill out our annual survey.

In the first part of this interview series, we asked a few JAX DevOps speakers to share their predictions for 2018 and pinpoint the characteristics that every good DevOps practitioner should have. 

Then we invited them to talk about DevSecOps, companies' move to the cloud and how/if this trend will continue to grow in 2018. In the third part of the interview series, we invited them to weigh in on the microservices hype, Java's place in all this and what's happening in the container world.

Now it’s time to talk about the serverless movement.  

How will serverless change in 2018?

Michiel Rook: Serverless is one more step towards "NoOps" (even though I dislike that term). That is, running on API-driven infrastructure that just works and that you don't have to think about, as a commodity.

Philipp Krenn:  I would expect the tooling and best practices around serverless to mature. It is a very useful tool that will solve some problems and we will learn where it shouldn’t be used.

Will serverless be seen as a competitor to container-based cloud infrastructure, or will they go hand in hand?

Daniel Bryant: I believe you will see serverless technologies running on containers and CIaaS — like Kubeless, OpenFaaS, Oracle's Fn, and Fission — and so they will be (largely) complementary.

The development styles can be very different, though — serverless FaaS is all about event-driven architecture and creating stateless services, while containers can satisfy more traditional paradigms.

Tommy Tynjä: There will be use cases for both technologies. The important thing for an organization leveraging these technologies is to have proper Continuous Delivery pipelines set up to allow the business to focus on what matters, which is the software that adds value to their customers. How the software is run is secondary, as long as all the necessary feedback loops such as delivery pipelines, monitoring and fault-tolerance etc. are in place.

Michiel Rook: I think there’s plenty of room for both technologies.

Philipp Krenn:  I think they will complement each other. While containers will move further away from managing infrastructure, both technologies will have their use-cases.

What challenges should Kubernetes and Docker address in 2018?

Daniel Bryant: The key focus of both platforms should be (and is) the developer experience, or developer UX — i.e. minimising the friction between a great idea, coding, testing, deploying and observing in production.

Many of us (myself included) have gotten very excited about containers, but now that the core container technology is maturing, we have to get back to basics — as developers, this is all about focusing on delivering value to our users by coding and deploying new features.

Figure 4: JAXenter technology trends survey 2017: Results

Also visit this Session:

Beyond Cloud: A road trip into AWS and back to bare metal
Torsten Köster (shopping24 internet group)

Public cloud services have become a commodity asset in the past years. Shopping24, though, is currently running all systems on pure bare metal. With this we're not alone: Etsy is a popular example of choosing a classic datacenter over a cloud one. In this use case, I'll lay out Shopping24's move into the Amazon AWS cloud a few years back and our recent journey back into a classic datacenter. Myths about pricing, noisy neighbors and machine sizing will get busted (or approved). The bottom line for doing effective DevOps in both classic and cloud infrastructure is automation. I'll explain how this can be achieved with the right toolset (Ansible) and a couple of modern server chassis.


The DevOps Actors

Daniel Bryant works as an Independent Technical Consultant and is CTO at SpectoLabs.

Tommy Tynjä is a Senior Software Engineer and Continuous Delivery Consultant at Diabol.

Michiel Rook is a Java/PHP/Scala consultant from the Netherlands, working at FourScouts.

Antonio Cobo is an Agile Delivery Consultant for OpenCredo.

Philipp Krenn is part of the infrastructure team and a developer advocate at Elastic.

Alexander Schwartz is Principal IT Consultant at msg systems.

Tommy Tynjä: Service-mesh frameworks such as Istio are very interesting and address cross-cutting concerns you are likely to encounter when running on Kubernetes. Such frameworks will become natural parts of service platforms going forward.

Philipp Krenn: Kubernetes — less alpha, more stable. If your CEO knew how much alpha and beta software is running their core business, we'd all get fired ;-).

Docker — fix or find your business model. Alternatively, make sure that the core technology isn't at risk regardless of what will happen to Docker Inc.

Could you offer us some tips & tricks that you discovered last year and decided to stick to?

Daniel Bryant: As a shameless plug, all of the tools and techniques that I am interested in will be shared in my upcoming O'Reilly book, "Continuous Delivery in Java".

Tommy Tynjä: When making decisions on which tools or products to use, there has to be proper support for automation using APIs, CLIs, infrastructure as code or such. Surprisingly many tools do not offer this today, and many have such support added later in non-ideal ways.

Otherwise you will find yourself spending too much time either figuring out how to automate even the simplest thing or, even worse, configuring something manually without being able to easily version control and reproduce it. Automation has to, and should, be easy and the encouraged way of configuring a tool or product.

Philipp Krenn: Since I work for Elastic, the company behind Elasticsearch, Kibana, Beats, and Logstash, I'm mostly focused on the logging, monitoring, and tracing side of things. Most of my tips and tricks are around combining different kinds of events to see the bigger picture, like correlating errors with infrastructure changes or deployments. Only when you combine all kinds of events together do you get the full overview.

Alexander Schwartz: In 2016/2017 I picked up Prometheus for monitoring and Zipkin for tracing. I don't want to miss them in my next project.