
Deploying Red Hat OpenShift Container Platform 3 on Google Cloud Engine

Chris Murphy, Peter Schiffer

Version 1.7, 2016-11-21


Table of Contents

Comments and Feedback
1. Executive Summary
2. Components and Configuration
  2.1. GCE
    2.1.1. Compute Engine
    2.1.2. Solid-state persistent disks
    2.1.3. Custom Images and Instance Templates
    2.1.4. Load Balancing and Scaling
    2.1.5. Compute Engine Networking
    2.1.6. Regions and Zones
  2.2. Cloud Storage and Container Registry
  2.3. Cloud DNS
    2.3.1. Public Zone
  2.4. Cloud Identity and Access Management
  2.5. Dynamic Inventory
  2.6. Bastion
  2.7. Nodes
    2.7.1. Master nodes
    2.7.2. Infrastructure nodes
    2.7.3. Application nodes
    2.7.4. Node labels
  2.8. OpenShift Pods
  2.9. Routing Layer
  2.10. Authentication
  2.11. Software Version Details
  2.12. Required Channels
  2.13. Tooling Prerequisites
    2.13.1. gcloud Setup
    2.13.2. Ansible Setup
    2.13.3. GitHub Repository
3. Provisioning the Infrastructure
  3.1. Preparing the Environment
    3.1.1. Red Hat Subscription Manager
    3.1.2. Download RHEL Image
  3.2. Preparing the Google Cloud Engine Infrastructure
    3.2.1. Sign in to the Google Cloud Platform Console
    3.2.2. Create a new GCE Project
    3.2.3. Set up Billing
    3.2.4. Add a Cloud DNS Zone
    3.2.5. Create Google OAuth 2.0 Client IDs
    3.2.6. Fill in OAuth Request Form
    3.2.7. Download and Set Up GCE tools
    3.2.8. Configure gcloud default project
    3.2.9. List out available gcloud Zones and Regions
  3.3. Prepare the Installation Script
  3.4. Running the gcloud.sh script
4. Finishing the Installation
  4.1. Validate the Deployment
    4.1.1. Running the Validation playbook again
  4.2. Operational Management
  4.3. Gathering Host Names
  4.4. Running Diagnostics
  4.5. Checking the Health of ETCD
  4.6. Default Node Selector
  4.7. Management of Maximum Pod Size
  4.8. Yum Repositories
  4.9. Console Access
    4.9.1. Log into GUI console and deploy an application
    4.9.2. Log into CLI and Deploy an Application
  4.10. Explore the Environment
    4.10.1. List Nodes and Set Permissions
    4.10.2. List Router and Registry
    4.10.3. Explore the Docker Registry
    4.10.4. Explore Using the oc Client to Work With Docker
    4.10.5. Explore Docker Storage
    4.10.6. Explore Firewall Rules
    4.10.7. Explore the Load Balancers
    4.10.8. Explore the GCE Network
  4.11. Persistent Volumes
    4.11.1. Creating a Persistent Volume
    4.11.2. Creating a Persistent Volume Claim
  4.12. Testing Failure
    4.12.1. Generate a Master Outage
    4.12.2. Observe the Behavior of ETCD with a Failed Master Node
    4.12.3. Generate an Infrastructure Node Outage
5. Conclusion
Appendix A: Revision History
Appendix B: Contributors


100 East Davie Street
Raleigh NC 27601 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
PO Box 13588
Research Triangle Park NC 27709 USA

All trademarks referenced herein are the property of their respective owners.

© 2016 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).

The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable for technical or editorial errors or omissions contained herein.

Distribution of modified versions of this document is prohibited without the explicit permission of Red Hat Inc.

Distribution of this work or derivative of this work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from Red Hat Inc.

The GPG fingerprint of the [email protected] key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E

Send feedback to [email protected]


Comments and Feedback

In the spirit of open source, we invite anyone to provide feedback and comments on any reference architecture. Although we review our papers internally, sometimes issues or typographical errors are encountered. Feedback allows us not only to improve the quality of the papers we produce, but also allows the reader to provide their thoughts on potential improvements and topic expansion to the papers. Feedback on the papers can be provided by emailing [email protected]. Please refer to the title within the email.


1. Executive Summary

Red Hat OpenShift Container Platform 3 is built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Atomic Host and Enterprise Linux. OpenShift Origin is the upstream community project that brings it all together along with extensions, to accelerate application development and deployment.

This reference environment provides a comprehensive example demonstrating how OpenShift Container Platform 3 can be set up to take advantage of the native high-availability capabilities of Kubernetes in order to create a highly available OpenShift Container Platform 3 environment. The configuration consists of three OpenShift masters, two OpenShift infrastructure nodes, and two OpenShift application nodes in a multi-zone environment. In addition to the configuration, operational management tasks are shown to demonstrate functionality.


2. Components and Configuration

This reference architecture describes the deployment of a highly available OpenShift Container Platform 3 environment on Google Cloud Engine (GCE).

The image below provides a high-level representation of the components within this reference architecture. Utilizing Google Cloud Engine (GCE), the resources detailed in this document are deployed in a highly available configuration consisting of Regions and Zones, Load Balancing and Scaling, Google Cloud DNS, Google OAuth, Custom Images, and Solid-State Persistent disks. Instances deployed are given specific roles to support OpenShift. The Bastion host limits external access to internal servers by ensuring that all SSH traffic passes through the Bastion host. The master instances host the OpenShift master components such as etcd and the OpenShift API. The Application instances are for users to deploy their containers, while the Infrastructure instances are used for the OpenShift router and registry. Authentication is managed by Google OAuth. Storage for both instances and persistent storage is handled by Solid-State Persistent disks. Networking for access to all OpenShift resources is handled by three Load Balancers that distribute traffic to the external-facing OpenShift API and console access originating from outside the cluster, internal access to the OpenShift API, and lastly all services exposed through OpenShift routing. Finally, all resources are registered with Google's Cloud DNS.


This reference architecture breaks down the deployment into separate phases.

• Phase 1: Provision the infrastructure on GCE

• Phase 2: Provision OpenShift Container Platform on GCE

• Phase 3: Post deployment activities

For Phase 1, the infrastructure is provisioned using the scripts provided in the openshift-ansible-contrib GitHub repo. Once completed, the scripts automatically start Phase 2, which is the deployment of OpenShift Enterprise. Phase 2 is done via the Ansible playbooks installed by the openshift-ansible-playbooks RPM package. The last phase, Phase 3, concludes the deployment by confirming the environment was deployed properly. This is done by running tools like oadm diagnostics and the systems engineering team's validation Ansible playbook.

The scripts provided in the GitHub repo are not supported by Red Hat. They merely provide a mechanism that can be used to build out your own infrastructure.

2.1. GCE

This section gives a brief overview of some of the common Google Cloud Engine products that are used in this reference implementation.

2.1.1. Compute Engine

Per Google: "Google Compute Engine delivers virtual machines running in Google's innovative data centers and worldwide fiber network. Compute Engine's tooling and workflow support enable scaling from single instances to global, load-balanced cloud computing."

Within this reference environment, the instances are deployed across multiple Regions and Zones, with us-central1-a as the default zone. However, the default region and zone can be changed during the deployment portion of the reference architecture to a location closer to the user's geo-location. The master instances for the OpenShift environment are machine type n1-standard-2 and are provided an extra disk which is used for Docker storage. The node instances are also machine type n1-standard-2 and are provided two extra disks which are used for Docker storage and OpenShift ephemeral volumes. The bastion host is deployed as machine type n1-standard-1. Instance sizing can be changed in the variable file for the installer, which is covered in a later chapter.

For more on sizing and choosing zones and regions, see https://cloud.google.com/compute.

2.1.2. Solid-state persistent disks

Google provides persistent block storage for your virtual machines as both standard (HDD) and solid-state (SSD) disks. This reference implementation utilizes Solid-State Persistent disks, as they provide better performance for random input/output operations per second.

Google’s persistent storage has the following attributes:


• Snapshotting capabilities

• Redundant, industry-standard data protection

• All data written to disk is encrypted on the fly and stored in encrypted form

Within this reference environment, both master and node instances for the OpenShift environment are provisioned with extra Solid-State Persistent disks. Each master instance is deployed with a single extra 25GB disk which is used for Docker storage. The node instances are deployed with an extra 25GB disk used for Docker storage and a 50GB disk used for OpenShift ephemeral volumes. Disk sizing can be changed in the variable file for the installer, which is covered in a later chapter.

For more information see https://cloud.google.com/compute/docs/disks.
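
The provisioning covered later in this guide creates these disks automatically, but as a rough sketch, an equivalent SSD persistent disk could be created by hand with the gcloud tool; the disk name and zone below are hypothetical:

# Create a standalone 25GB SSD persistent disk in the default zone
$ gcloud compute disks create ocp-node-1-docker \
    --size 25GB \
    --type pd-ssd \
    --zone us-central1-a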

2.1.3. Custom Images and Instance Templates

Google provides images of many popular Operating Systems, including Red Hat, CentOS, and Fedora, which can be used to deploy infrastructure. Google also provides the ability to upload and create custom images, which has the added appeal of allowing an image to be pre-configured with the packages and services common to every OpenShift master and node instance. This feature speeds up the initial configuration steps needed for each instance, thus improving the overall deployment time. For this reference architecture, an image provided by Red Hat will be uploaded and pre-configured for use as the base image for all OpenShift Container Platform instances.

This reference architecture will also take advantage of GCE's instance template feature. This allows all the required information (such as machine type, disk image, network, and zone) to be turned into templates for each of the OpenShift instance types. These custom instance templates are then used to launch both OpenShift v3 masters and nodes.

For more information see https://cloud.google.com/compute/docs/creating-custom-image and https://cloud.google.com/compute/docs/instance-templates.
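
As a hedged illustration of the feature, an instance template along the lines of those the provisioning script creates could be defined as follows; the template, image, and network names here are placeholders:

# Template combining machine type, custom image, and network for node instances
$ gcloud compute instance-templates create node-template \
    --machine-type n1-standard-2 \
    --image rhel-guest-image-7-3 \
    --network ocp-network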

2.1.4. Load Balancing and Scaling

A Load Balancing service enables distribution of traffic across multiple instances in the GCE cloud. There are clear advantages to doing this:

• Keeps your applications available

  • Supports Compute Engine Autoscaler, which allows users to perform autoscaling on groups of instances

GCE provides three kinds of Load Balancing: HTTP(S), Network-based, and SSL Proxy. To be able to distribute WebSocket traffic as well as HTTP(S), this implementation guide uses Network Load Balancing and SSL Proxy.

For more information see https://cloud.google.com/compute/docs/load-balancing-and-autoscaling.

GCE LB implements parts of the Routing Layer.


Load Balancer Details

Three load balancers are used in the reference environment. The table below describes each load balancer's Cloud DNS name, the instances to which the load balancer is attached, and the port(s) monitored by the health check attached to each load balancer to determine whether an instance is in or out of service.

Table 1. Load Balancers

Load Balancer                                Assigned Instances  Port
openshift-master.gce.sysdeseng.com           ocp-master-*        443
internal-openshift-master.gce.sysdeseng.com  ocp-master-*        443
*.apps.gce.sysdeseng.com                     ocp-infra-*         80 and 443

Both the internal-openshift-master and the openshift-master load balancers utilize the OpenShift Master API port for communication. The internal-openshift-master load balancer uses the private subnets for internal cluster communication with the API in order to be more secure. The openshift-master load balancer is used for external traffic to access the OpenShift environment through the API or the web interface. The "*.apps" load balancer allows public traffic to reach the infrastructure nodes. The infrastructure nodes run the router pod, which then directs traffic from the outside world into OpenShift pods with external routes defined.
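
Once the environment is provisioned, the load balancing components can be inspected with the gcloud tool, for example:

$ gcloud compute forwarding-rules list
$ gcloud compute target-pools list
$ gcloud compute http-health-checks list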

2.1.5. Compute Engine Networking

GCE Networking allows you to configure custom virtual networks, including IP address ranges, gateways, route tables, and firewalls.

For more information see https://cloud.google.com/compute/docs/networks-and-firewalls.


Firewall Details

To meet the specific needs of OpenShift services, only the following ports are opened in the network firewall.

Table 2. ICMP and SSH Firewall Rules

Port  Protocol  Service  Source    Target
      ICMP               Internet  All nodes
22    TCP       SSH      Internet  Bastion
22    TCP       SSH      Bastion   All internal nodes

Table 3. Master Nodes Firewall Rules

Port        Protocol  Service           Source              Target
2379, 2380  TCP       etcd              Master nodes        Master nodes
8053        TCP, UDP  SkyDNS            All internal nodes  Master nodes
443         TCP       Web Console, API  Internet            Master nodes

Table 4. App and Infra Nodes Firewall Rules

Port     Protocol  Service   Source               Target
4789     UDP       SDN       App and Infra nodes  App and Infra nodes
10250    TCP       Kubelet   Master nodes         App and Infra nodes
5000     TCP       Registry  App and Infra nodes  Infra nodes
80, 443  TCP       HTTP(S)   Internet             Infra nodes
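
The provisioning script creates all of these rules automatically. As a sketch only, a rule equivalent to the SSH-to-bastion entry above could be created manually as follows; the rule name and the bastion target tag are hypothetical, and the network name is the default used in this guide:

# Allow SSH from anywhere, but only to instances tagged as the bastion
$ gcloud compute firewall-rules create ssh-to-bastion \
    --network ocp-network \
    --allow tcp:22 \
    --source-ranges 0.0.0.0/0 \
    --target-tags bastion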


2.1.6. Regions and Zones

GCE has a global infrastructure that covers 4 regions and 13 availability zones. Per Google: "Certain Compute Engine resources live in regions or zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones. For example, the us-central1 region denotes a region in the Central United States that has zones us-central1-a, us-central1-b, and us-central1-f."

For this reference architecture, we will default to the "us-central1" region. To check which region andzone your user is set to use, the following command will provide the answer:

$ gcloud config configurations list
NAME     IS_ACTIVE  ACCOUNT              PROJECT      DEFAULT_ZONE   DEFAULT_REGION
default  True       [email protected]  ose-refarch  us-central1-a  us-central1

In the next chapter, we will go deeper into regions and zones and discuss how to set them using the gcloud tool as well as in the variables file for the infrastructure setup.

For more information see https://cloud.google.com/compute/docs/zones.

2.2. Cloud Storage and Container Registry

GCE provides object cloud storage which can be used by OpenShift Container Platform 3.

OpenShift can build Docker images from your source code, deploy them, and manage their life cycle. To enable this, OpenShift provides an internal, integrated Docker registry that can be deployed in your OpenShift environment to manage images.

The registry stores Docker images and metadata. For production environments, you should use persistent storage for the registry; otherwise, any images that have been built or pushed into the registry would disappear if the pod were to restart.

Using the installation methods described in this document, the registry is deployed using a Google Cloud Storage (GCS) bucket. The GCS bucket allows multiple pods to be deployed at once for high availability while using the same persistent backend storage. GCS is object-based storage, which does not get assigned to nodes in the same way that Persistent Disks are attached and assigned to a node. The bucket does not mount as block-based storage to the node, so commands like fdisk or lsblk will not show information regarding the GCS bucket. The configuration for the GCS bucket and the credentials to log in to the bucket are stored as OpenShift secrets and applied to the pod. The registry can be scaled to many pods and can even have multiple instances of the registry running on the same host due to the use of GCS.
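
After the installation completes, the bucket and the registry secrets can be verified; the bucket name below follows the REGISTRY_BUCKET default described later in this guide for the ocp-on-gce example project:

# Confirm the registry bucket exists
$ gsutil ls -b gs://ocp-on-gce-openshift-docker-registry
# The GCS configuration and credentials are stored as secrets in the default project
$ oc get secrets -n default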

For more information see https://cloud.google.com/storage.


2.3. Cloud DNS

DNS is an integral part of a successful OpenShift Container Platform deployment/environment. GCE has the Google Cloud DNS web service. Per Google: "Google Cloud DNS is a scalable, reliable and managed authoritative Domain Name System (DNS) service running on the same infrastructure as Google. It has low latency, high availability and is a cost-effective way to make your applications and services available to your users. Cloud DNS translates requests for domain names like www.google.com into IP addresses like 74.125.29.101."

OpenShift Container Platform requires a properly configured wildcard DNS zone that resolves to the IP address of the OpenShift router. For more information, please refer to the OpenShift Container Platform DNS documentation. In this reference architecture, Cloud DNS manages the DNS records for the OpenShift Container Platform environment.

For more information see https://cloud.google.com/dns.

2.3.1. Public Zone

The Public Cloud DNS zone requires a domain name either purchased through Google's "Domains" service or through one of many 3rd party providers. Once the zone is created in Cloud DNS, the name servers provided by Google will need to be added to the registrar.

For more information see https://domains.google.


2.4. Cloud Identity and Access Management

GCE provides IAM to securely control access to GCE services and resources for your users. In this guide, IAM provides security for the administrators creating the cluster.

The deployment of OpenShift requires a user that has been granted the proper permissions by the GCE IAM administrator. The user must be able to create service accounts, cloud storage, instances, images, templates, and Cloud DNS entries, and to deploy load balancers and health checks. It is helpful to have delete permissions in order to be able to redeploy the environment while testing.

For more information see https://cloud.google.com/iam.

2.5. Dynamic Inventory

Ansible relies on inventory files and variables to perform playbook runs. As part of this reference architecture, Ansible playbooks are provided which scan the inventory automatically using a dynamic inventory script. This script generates an Ansible host file which is kept in memory for the duration of the Ansible run. The dynamic inventory script queries the Google Compute API to display information about GCE instances. The dynamic inventory script is also referred to as an Ansible inventory script, and the GCE-specific script is written in Python. The script can be executed manually to provide information about the environment, but for this reference architecture it is automatically called to generate the Ansible inventory. For the OpenShift installation, the Python script and the Ansible module add_host are used to allow instances to be grouped based on their purpose for use in later playbooks. The Python script groups the instances based on the hostname structures of the provisioned instances, and tags are applied to each instance. The masters are assigned the master tag, the infrastructure nodes are assigned the infra tag, and the application nodes are assigned the app tag.
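
To see what the dynamic inventory returns, the GCE inventory script can also be run by hand; this sketch assumes the script's credentials (a gce.ini file or the corresponding environment variables) have already been configured:

# Dump the inventory as pretty-printed JSON
$ ./gce.py --list | python -m json.tool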

For more information see http://docs.ansible.com/ansible/intro_dynamic_inventory.html.


2.6. Bastion

As shown in the Bastion Diagram, the bastion server in this reference architecture provides a secure way to limit SSH access to the GCE environment. The master and node firewall rules only allow for SSH connectivity between nodes inside of the local network, while the bastion allows SSH access from everywhere. The bastion host is the only ingress point for SSH traffic in the cluster from external entities. When connecting to the OpenShift Container Platform infrastructure, the bastion forwards the request to the appropriate server. Connecting through the bastion server requires specific SSH configuration, which is set up by the gcloud.sh installation script; a sketch is shown after the diagram below.

Figure 1. Bastion Diagram
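
A minimal sketch of the kind of ~/.ssh/config stanzas the gcloud.sh script generates is shown below; the bastion hostname, domain, and user are placeholder values for this guide's environment:

Host bastion
    HostName bastion.gce.sysdeseng.com
    User cloud-user
Host *.gce.sysdeseng.com
    User cloud-user
    ProxyCommand ssh -W %h:%p bastion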


2.7. Nodes

Nodes are GCE instances that serve a specific purpose for OpenShift. OpenShift masters are also considered nodes. Nodes deployed on GCE can be vertically scaled before or after the OpenShift installation using the GCE console. All OpenShift-specific nodes are assigned an IAM role which allows for cloud-specific tasks to occur against the environment, such as adding persistent volumes or removing a node from the OpenShift Container Platform cluster automatically. There are three types of nodes as described below.

2.7.1. Master nodes

The master nodes contain the master components, including the API server, controller manager server, and ETCD. The master maintains the cluster's configuration and manages the nodes in its OpenShift cluster. The master assigns pods to nodes and synchronizes pod information with service configuration. The master is used to define routes, services, and volume claims for pods deployed within the OpenShift environment.

2.7.2. Infrastructure nodes

The infrastructure nodes are used for the router and registry pods. These nodes can also be used if the optional components Kibana and Hawkular Metrics are required. The storage for the Docker registry that is deployed on the infrastructure nodes is backed by a GCS bucket, which allows multiple pods to use the same storage. GCS storage is used because it is synchronized between the availability zones, providing data redundancy.

2.7.3. Application nodes

The Application nodes are the instances where non-infrastructure based containers run. Depending on the application, GCE-specific storage such as block storage can be applied, assigned using a Persistent Volume Label, for application data that needs to persist between container restarts. A configuration parameter is set on the master which ensures that OpenShift Container Platform user containers will be placed on the application nodes by default.

2.7.4. Node labels

All OpenShift Container Platform nodes are assigned a label. This allows certain pods to be deployed on specific nodes. For example, nodes labeled infra are Infrastructure nodes; these nodes run the router and registry pods. Nodes with the label app are used for end-user Application pods. The configuration parameter defaultNodeSelector: "role=app" in /etc/origin/master/master-config.yaml ensures all projects are automatically deployed on Application nodes.
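
The labels and the default node selector can be verified after the installation, for example:

# Show the role labels assigned to each node
$ oc get nodes --show-labels
# Confirm the default node selector on a master instance
$ grep defaultNodeSelector /etc/origin/master/master-config.yaml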


2.8. OpenShift Pods

OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. For example, a pod could be just a single PHP application connecting to a database outside of the OpenShift environment, or a pod could be a PHP application that has an ephemeral database. OpenShift pods can be scaled at runtime or at the time of launch using the OpenShift console or the oc CLI tool, as shown below. Any container running in the environment is considered to be part of a pod. The pods containing the OpenShift router and registry are required to be deployed in the OpenShift environment.
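
For example, a deployed application can be scaled to additional pod replicas with the oc client; the deploymentconfig name php-demo is hypothetical:

$ oc scale dc/php-demo --replicas=3
$ oc get pods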

2.9. Routing Layer

Pods inside of an OpenShift cluster are only reachable via their IP addresses on the cluster network. An edge load balancer can be used to accept traffic from outside networks and proxy the traffic to pods inside the OpenShift cluster.

An OpenShift administrator can deploy routers in an OpenShift cluster. These enable routes created by developers to be used by external clients.

OpenShift routers provide external hostname mapping and load balancing to services over protocols that pass distinguishing information directly to the router; the hostname must be present in the protocol in order for the router to determine where to send it. Routers support the following protocols:

• HTTP

• HTTPS (with SNI)

• WebSockets

• TLS with SNI

The router utilizes the wildcard zone specified during the installation and configuration of OpenShift. This wildcard zone is used by the router to create routes for a service running within the OpenShift environment to a publicly accessible URL. The wildcard zone itself is a wildcard entry in Cloud DNS which is linked using a CNAME to a Load Balancer, which performs health checks for high availability and forwards traffic to the router pods on ports 80 and 443.
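
For example, a service can be exposed through the router under the wildcard zone; the service name is hypothetical and the hostname falls under the *.apps domain used in this guide:

$ oc expose service php-demo --hostname=php-demo.apps.gce.sysdeseng.com
$ oc get route php-demo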


2.10. Authentication

There are several options when it comes to authentication of users in OpenShift Container Platform. OpenShift can leverage an existing identity provider within an organization such as LDAP, or OpenShift can use external identity providers like GitHub, Google, and GitLab. The configuration of identity providers occurs on the OpenShift master instances. OpenShift allows for multiple identity providers to be specified. This reference architecture document uses Google as the authentication provider, but any of the other mechanisms would be an acceptable choice. Roles can be added to user accounts to allow for extra privileges, such as the ability to list nodes or assign persistent storage volumes to a project.

For more information on Google OAuth and other authentication methods, see the OpenShift documentation.

2.11. Software Version Details

The following tables provide the installed software versions for the different servers that make up the Red Hat OpenShift highly available reference environment.

Table 5. RHEL OSEv3 Details

Software                                              Version
Red Hat Enterprise Linux 7.2 x86_64                   kernel-3.10.0-327
Atomic-OpenShift {master/clients/node/sdn-ovs/utils}  3.3.x.x
Docker                                                1.10.x
Ansible                                               2.2.0-0.5.prerelease.el7.noarch

2.12. Required Channels

A subscription to the following channels is required in order to deploy this reference environment's configuration.

Table 6. Required Channels - OSEv3 Master and Node Instances

Channel                                            Repository Name
Red Hat Enterprise Linux 7 Server (RPMs)           rhel-7-server-rpms
Red Hat OpenShift Enterprise 3.3 (RPMs)            rhel-7-server-ose-3.3-rpms
Red Hat Enterprise Linux 7 Server - Extras (RPMs)  rhel-7-server-extras-rpms


2.13. Tooling Prerequisites

This section describes how the environment should be configured to provision the infrastructure, use Ansible to install OpenShift, and perform post-installation tasks.

2.13.1. gcloud Setup

The gcloud SDK needs to be installed on the system that will be provisioning the GCE infrastructure. Installation of the gcloud utility is interactive; you will usually want to answer the questions positively.

$ sudo yum -y install curl python which tar qemu-img openssl gettext git
$ curl https://sdk.cloud.google.com | bash
$ exec -l $SHELL
$ gcloud components install beta
$ gcloud init

2.13.2. Ansible Setup

The gcloud.sh install script will provision a custom image with the following repositories and packages; this image is used for all instances that make up the OpenShift infrastructure on GCE. This includes the Ansible package that will be used on the bastion host to deploy OpenShift Container Platform.

$ rpm -q python
$ subscription-manager repos --enable rhel-7-server-rpms
$ subscription-manager repos --enable rhel-7-server-extras-rpms
$ subscription-manager repos --enable rhel-7-server-ose-3.3-rpms
$ yum -y install atomic-openshift-utils \
    git \
    ansible-2.2.0-0.50.prerelease.el7.noarch \
    python-netaddr \
    python-httplib2

2.13.3. GitHub Repository

The code in the openshift-ansible-contrib repository referenced below handles the installation of OpenShift and the accompanying infrastructure. The openshift-ansible-contrib repository is not explicitly supported by Red Hat, but the Reference Architecture team performs testing to ensure the code operates as defined and is secure.


3. Provisioning the Infrastructure

This chapter focuses on Phase 1 of the process. The prerequisites defined below are required for a successful deployment of the infrastructure and the installation of OpenShift.

3.1. Preparing the Environment

The script and playbooks provided within the openshift-ansible-contrib GitHub repository deploy the infrastructure, install and configure OpenShift, and perform post-installation tasks such as scaling the router and registry. The playbooks create the specific roles, policies, and users required for cloud provider configuration in OpenShift, and set up management of a newly created GCS bucket to store container images.

Before beginning to provision resources and install OpenShift Container Platform, there are a few prerequisites that need to be taken care of; these are covered in the following sections.

3.1.1. Red Hat Subscription Manager

The installation of OCP requires a valid Red Hat subscription. For the installation of OCP on GCE, the following items are required:

Table 7. Red Hat Subscription Manager requirements

Requirement                            Example Value / Required Subscription
Red Hat Subscription Manager User      rhsm-se
Red Hat Subscription Manager Password  SecretPass
Subscription Name or Pool ID           Red Hat OpenShift Container Platform, Standard, 2-Core

The items above are examples and should reflect subscriptions relevant to the account performing the installation. There are a few different variants of the OpenShift Subscription Name. It is advised to visit https://access.redhat.com/management/subscriptions to find the specific Pool ID and Subscription Name, as the values will be used below during the deployment.
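
For reference, an instance can be registered and subscribed manually using the example values from the table above; the provisioning script performs this step automatically:

$ subscription-manager register --username=rhsm-se --password=SecretPass
$ subscription-manager attach --pool=<pool-id>
$ subscription-manager repos --enable rhel-7-server-rpms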


3.1.2. Download RHEL Image

Download the latest RHEL 7.x KVM image from the Customer Portal.

Figure 2. RHEL 7.x KVM Image Download

Remember where you download the KVM image to; the path to the '.qcow2' file that gets downloaded will be used to create the custom image.

3.2. Preparing the Google Cloud Engine Infrastructure

This section covers preparing the Google Cloud Platform environment.

3.2.1. Sign in to the Google Cloud Platform Console

Documentation for Google Cloud Engine and the Google Cloud Platform Console can be found at the following link: https://cloud.google.com/compute/docs. To access the Google Cloud Platform Console, a Google account is needed.


3.2.2. Create a new GCE Project

This reference architecture assumes that a "greenfield" deployment is being done in a newly created GCE project. To set that up, log into the GCE console home and select the Project button:

Figure 3. Create New Project

Next, name your new project. The name ocp-refarch was used for this reference architecture, and can be used for your project also:

Figure 4. Name New Project


3.2.3. Set up Billing

The next step is to set up billing for GCE so you are able to create new resources. Go to the Billing tab in the GCE Console and select Enable Billing. The new project can be linked to an existing billing account, or financial information can be entered:

Figure 5. Enable Billing

Figure 6. Link to existing account


3.2.4. Add a Cloud DNS Zone

In this reference implementation guide, the domain sysdeseng.com was purchased through an external provider, and the subdomain gce.sysdeseng.com will be managed by Cloud DNS. For the example below, the domain gce.sysdeseng.com represents the Publicly Hosted Domain Name used for the installation of OpenShift; however, your own Publicly Hosted Domain Name will need to be substituted for the actual installation. Follow the instructions below to add your Publicly Hosted Domain Name to Cloud DNS.

• From the main GCP dashboard, select the Networking section and click Cloud DNS

• Click "Create Zone"

  • Enter a name that logically represents your Publicly Hosted Domain Name as the Cloud DNS zone name

    • the zone name must use "-" as a separator

    • it is recommended to simply replace the "."s in your Publicly Hosted Domain Name with "-" for easier recognition of the zone: gce-sysdeseng-com

• Enter your Publicly Hosted Domain Name: gce.sysdeseng.com

• Enter a Description: Public Zone for RH Reference Architecture

• Click Create

Once the Public Zone is created, select the gce-sysdeseng-com domain and copy the Google Name Servers to add to your external registrar, if applicable.
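
The same zone can alternatively be created from the command line; this sketch uses the zone name and description from the console walkthrough above:

$ gcloud dns managed-zones create gce-sysdeseng-com \
    --dns-name gce.sysdeseng.com. \
    --description "Public Zone for RH Reference Architecture"
# Display the zone, including the Google name servers for your registrar
$ gcloud dns managed-zones describe gce-sysdeseng-com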


3.2.5. Create Google OAuth 2.0 Client IDs

In order to obtain new OAuth credentials in GCE, the OAuth consent screen must be configured. Browse to the API Manager screen in the GCE console and select Credentials. Select the "OAuth consent screen" tab as shown in the image below:

Figure 7. Google OAuth consent screen

  • Choose an email address from the dropdown

• Enter a Product Name

  • (Optional, but recommended) Enter a Homepage URL

  • The Homepage URL will be "https://<your.domain.name>" (this will be the URL used when accessing OpenShift)

• Optionally fill out the remaining fields, but they are not necessary for this reference architecture.

• Click Save


3.2.6. Fill in OAuth Request Form

Clicking Save will bring you back to the API Manager → Credentials tab. Next, click on the "Create Credentials" dropdown and select "OAuth Client ID".

Figure 8. Select OAuth Client ID


On the "Create Client ID" page:

Figure 9. New Client ID Page

Obtain both "Client ID" and "Client Secret"

• Click the "Web Application" radio button:

• Enter a Name

• Enter Authorization callback URL

  • The Authorization callback URL will be the Homepage URL + /oauth2callback/google (see the example in the New Client ID Page image above)

• Click "Create"

Copy down the Client ID and Client Secret for use in the provisioning script.


3.2.7. Download and Set Up GCE tools

To manage your Google Compute Engine resources, download and install the gcloud command-line tool.

Installation of the gcloud utility is interactive; you will usually want to answer the questions positively.

$ yum install curl python which
$ curl https://sdk.cloud.google.com | bash
$ exec -l $SHELL
$ gcloud components install beta
$ gcloud init

Detailed documentation can be found here: https://cloud.google.com/sdk.

3.2.8. Configure gcloud default project

$ gcloud config set project PROJECT-ID

3.2.9. List out available gcloud Zones and Regions.

To find all regions and zones available to you, run:

$ gcloud compute zones list
NAME            REGION        STATUS  NEXT_MAINTENANCE  TURNDOWN_DATE
asia-east1-a    asia-east1    UP
asia-east1-b    asia-east1    UP
asia-east1-c    asia-east1    UP
europe-west1-b  europe-west1  UP
europe-west1-d  europe-west1  UP
europe-west1-c  europe-west1  UP
us-central1-a   us-central1   UP
us-central1-f   us-central1   UP
us-central1-c   us-central1   UP
us-central1-b   us-central1   UP
us-east1-b      us-east1      UP
us-east1-c      us-east1      UP
us-east1-d      us-east1      UP

This reference architecture uses us-central1-a as the default zone. If you decide to use another zone, you can select any of the zones within a region to be the default; there is no advantage to choosing one over another.
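
The chosen defaults can be stored in the gcloud configuration so subsequent commands use them automatically, for example:

$ gcloud config set compute/zone us-central1-a
$ gcloud config set compute/region us-central1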


3.3. Prepare the Installation Script

If the openshift-ansible-contrib GitHub repository has not been cloned as instructed in the GitHub Repository section from the previous chapter, clone it now as shown below, then copy openshift-ansible-contrib/reference-architecture/gce-cli/config.sh.example to openshift-ansible-contrib/reference-architecture/gce-cli/config.sh.

Once the gcloud utility is configured, deployment of the infrastructure and OpenShift can be started. First, clone the openshift-ansible-contrib repository, switch to the gce-cli directory, and copy the config.sh.example file to config.sh:

$ git clone https://github.com/openshift/openshift-ansible-contrib.git && cd openshift-ansible-contrib/reference-architecture/gce-cli
$ cp config.sh.example config.sh

Next, edit the configuration variables in the config.sh file to match those which you have been creating throughout this reference architecture. Below are the variables that can be set for the OpenShift Container Platform installation on Google Compute Engine. The table is broken into two sections. Section A contains variables that must be changed in order for the installation to complete. The only case where these would not need to be changed would be if the default values were used during the setup sections in previous chapters. Links back to those chapters are placed where possible. Section B mostly contains variables that should not be changed, which are marked "Do Not Modify" in the table. There are a small number of variables in Section B that can be set based on user preference, including node sizing and the number of node types; however, modifying any of these variables is beyond the scope of this reference architecture and is therefore not recommended.

Table 8. OpenShift config.sh Installation Variables (Section A)

Variable Name             Default                                                    Explanation/Purpose
RHEL_IMAGE_PATH           "${HOME}/Downloads/rhel-guest-image-7.3-35.x86_64.qcow2"  RHEL 7.x KVM Image Download
RH_USERNAME               '[email protected]'                                       Red Hat Subscription Manager
RH_PASSWORD               'xxx'                                                      Red Hat Subscription Manager
RH_POOL_ID                'xxx'                                                      Red Hat Subscription Manager
GCLOUD_PROJECT            'project-1'                                                Create a new GCE Project
GCLOUD_ZONE               'us-central1-a'                                            List out available gcloud Zones and Regions
DNS_DOMAIN                'ocp.example.com'                                          Add a Cloud DNS Zone
DNS_DOMAIN_NAME           'ocp-example-com'                                          Add a Cloud DNS Zone
MASTER_DNS_NAME           'master.ocp.example.com'                                   Replace with master.$DNS_DOMAIN
INTERNAL_MASTER_DNS_NAME  'internal-master.ocp.example.com'                          Replace with internal-master.$DNS_DOMAIN
OCP_APPS_DNS_NAME         'apps.ocp.example.com'                                     Replace with apps.$DNS_DOMAIN

If SSL certs signed by a CA are available for the Web Console, place them in the provided default paths on the local file system. If you do not have SSL certs, comment out or delete the lines that define MASTER_HTTPS_CERT_FILE and MASTER_HTTPS_KEY_FILE, and an SSL key and a self-signed certificate will be generated automatically during the provisioning process.

MASTER_HTTPS_CERT_FILE    "${HOME}/master.${DNS_DOMAIN}.pem"
MASTER_HTTPS_KEY_FILE     "${HOME}/master.${DNS_DOMAIN}.key"

Below is the YAML snippet that sets up Single Sign-On as described in Obtain both "Client ID" and "Client Secret". Paste the "Client ID" and "Client Secret" in the appropriate places. The "hosted_domain" section is optional, but setting it will limit logins to users with the provided domain in their email address.

OCP_IDENTITY_PROVIDERS    '[ {"name": "google", "kind": "GoogleIdentityProvider", "login": "true", "challenge": "false", "mapping_method": "claim", "client_id": "xxx-yyy.apps.googleusercontent.com", "client_secret": "zzz", "hosted_domain": "example.com"} ]'


Table 9. OpenShift config.sh Installation Variables (Section B)

Variable Name Defaults Explanation/Purpose

OCP_VERSION '3.3' Do Not Modify

CONSOLE_PORT '443' Do Not Modify

OCP_NETWORK 'ocp-network' Name of GCE Network that willbe created for the project

MASTER_MACHINE_TYPE 'n1-standard-2' Minimum Machine Size forMaster Nodes

NODE_MACHINE_TYPE 'n1-standard-2' Minimum Machine Size for BaseNodes

INFRA_NODE_MACHINE_TYPE 'n1-standard-2' Minimum Machine Size forInfrastructure Nodes

BASTION_MACHINE_TYPE 'n1-standard-1' Minimum Machine Size for theBastion Node

MASTER_INSTANCE_TEMPLATE 'master-template' Do Not Modify

NODE_INSTANCE_TEMPLATE 'node-template' Do Not Modify

INFRA_NODE_INSTANCE_TEMPLATE 'infra-node-template' Do Not Modify

BASTION_INSTANCE 'bastion' Do Not Modify

MASTER_INSTANCE_GROUP 'ocp-master' Do Not Modify

MASTER_INSTANCE_GROUP_SIZE '3' Minimum Number of Master Nodes

MASTER_NAMED_PORT_NAME 'web-console' Do Not Modify

INFRA_NODE_INSTANCE_GROUP 'ocp-infra' Do Not Modify

INFRA_NODE_INSTANCE_GROUP_SIZE '2' Minimum Number of Infrastructure Nodes

NODE_INSTANCE_GROUP 'ocp-node' Do Not Modify

NODE_INSTANCE_GROUP_SIZE '2' Minimum Number of Base Nodes

NODE_DOCKER_DISK_SIZE '25' Minimum Size of Docker Disks, in GB

NODE_DOCKER_DISK_POSTFIX '-docker' Do Not Modify

NODE_OPENSHIFT_DISK_SIZE '50' Minimum Size of OpenShift Disks, in GB


NODE_OPENSHIFT_DISK_POSTFIX '-openshift' Do Not Modify

MASTER_NETWORK_LB_HEALTH_CHECK 'master-network-lb-health-check' Do Not Modify

MASTER_NETWORK_LB_POOL 'master-network-lb-pool' Do Not Modify

MASTER_NETWORK_LB_IP 'master-network-lb-ip' Do Not Modify

MASTER_NETWORK_LB_RULE 'master-network-lb-rule' Do Not Modify

MASTER_SSL_LB_HEALTH_CHECK 'master-ssl-lb-health-check' Do Not Modify

MASTER_SSL_LB_BACKEND 'master-ssl-lb-backend' Do Not Modify

MASTER_SSL_LB_IP 'master-ssl-lb-ip' Do Not Modify

MASTER_SSL_LB_CERT 'master-ssl-lb-cert' Do Not Modify

MASTER_SSL_LB_TARGET 'master-ssl-lb-target' Do Not Modify

MASTER_SSL_LB_RULE 'master-ssl-lb-rule' Do Not Modify

ROUTER_NETWORK_LB_HEALTH_CHECK 'router-network-lb-health-check' Do Not Modify

ROUTER_NETWORK_LB_POOL 'router-network-lb-pool' Do Not Modify

ROUTER_NETWORK_LB_IP 'router-network-lb-ip' Do Not Modify

ROUTER_NETWORK_LB_RULE 'router-network-lb-rule' Do Not Modify

IMAGE_BUCKET "${GCLOUD_PROJECT}-rhel-guest-raw-image" Do Not Modify

REGISTRY_BUCKET "${GCLOUD_PROJECT}-openshift-docker-registry" Do Not Modify

TEMP_INSTANCE 'ocp-rhel-temp' Do Not Modify

GOOGLE_CLOUD_SDK_VERSION '134.0.0' Currently Supported Google Cloud SDK Version

BASTION_SSH_FW_RULE 'bastion-ssh-to-external-ip' Do Not Modify


3.4. Running the gcloud.sh script

Listing 1. OpenShift gcloud.sh Installation Script

$ ./gcloud.sh
$MASTER_HTTPS_CERT_FILE or $MASTER_HTTPS_KEY_FILE variable is empty - self-signed certificate will be generated
Converting qcow2 image to raw image:  (100.00/100%)
Creating archive of raw image: disk.raw
Creating gs://ocp-on-gce-rhel-guest-raw-image/...
Copying file://rhel-guest-image-7.3-35.x86_64.tar.gz [Content-Type=application/x-tar]...
| [1 files][439.4 MiB/439.4 MiB] 277.3 KiB/s
Operation completed over 1 objects/439.4 MiB.
Created [https://www.googleapis.com/compute/v1/projects/ocp-on-gce/global/images/rhel-guest-image-7-2-20160302-0-x86-64].
NAME                                    PROJECT     FAMILY  DEPRECATED  STATUS
rhel-guest-image-7-2-20160302-0-x86-64  ocp-on-gce                      READY
Removing gs://ocp-on-gce-rhel-guest-raw-image/rhel-guest-image-7.3-35.x86_64.tar.gz#1478124628607057...
/ [1/1 objects] 100% Done
Operation completed over 1 objects.
..........................................


4. Finishing the Installation

Listing 2. Installation Cont’d From Chapter 3 - OpenShift gcloud.sh Installation Script

...........................

...........................

...........................
TASK [validate-etcd : Validate etcd] *******************************************
changed: [ocp-master-ln1n]
changed: [ocp-master-y3kz]
changed: [ocp-master-0d2x]

TASK [validate-etcd : ETCD Cluster is healthy] *********************************
ok: [ocp-master-ln1n] => {
  "msg": "Cluster is healthy"
}
ok: [ocp-master-y3kz] => {
  "msg": "Cluster is healthy"
}
ok: [ocp-master-0d2x] => {
  "msg": "Cluster is healthy"
}

TASK [validate-etcd : ETCD Cluster is NOT healthy] *****************************
skipping: [ocp-master-ln1n]
skipping: [ocp-master-0d2x]
skipping: [ocp-master-y3kz]

PLAY [primary_master] **********************************************************

TASK [validate-app : Gather facts] *********************************************
ok: [ocp-master-0d2x]

TASK [validate-app : Create the validation project] ****************************
changed: [ocp-master-0d2x]

TASK [validate-app : Create Hello world app] ***********************************
changed: [ocp-master-0d2x]

TASK [validate-app : Wait for build to complete] *******************************
changed: [ocp-master-0d2x]

TASK [validate-app : Wait for App to be running] *******************************
changed: [ocp-master-0d2x]

TASK [validate-app : Sleep to allow for route propegation] *********************
Pausing for 10 seconds


(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [ocp-master-0d2x]

TASK [validate-app : check the status of the page] *****************************
ok: [ocp-master-0d2x]

TASK [validate-app : Delete the Project] ***************************************
changed: [ocp-master-0d2x]

PLAY RECAP *********************************************************************
localhost        : ok=23    changed=16    unreachable=0   failed=0
ocp-infra-b5ni   : ok=155   changed=50    unreachable=0   failed=0
ocp-infra-hvqv   : ok=155   changed=50    unreachable=0   failed=0
ocp-master-0d2x  : ok=503   changed=163   unreachable=0   failed=0
ocp-master-ln1n  : ok=335   changed=113   unreachable=0   failed=0
ocp-master-y3kz  : ok=335   changed=113   unreachable=0   failed=0
ocp-node-1pks    : ok=148   changed=53    unreachable=0   failed=0
ocp-node-o2br    : ok=148   changed=53    unreachable=0   failed=0

Connection to 104.199.62.228 closed.

Deployment is complete. OpenShift Console can be found at https://master.gce.sysdeseng.com

A successful deployment is shown above; the two important columns are "unreachable=0" and "failed=0".

The last line of the installation script shows the address of the configured OpenShift Console. In this example, the OpenShift Console can now be reached at https://master.gce.sysdeseng.com.

With the deployment successful, the following section demonstrates how to confirm proper functionality of the Red Hat OpenShift Container Platform.


4.1. Validate the Deployment

The final part of the deployment script runs the OpenShift Ansible validation playbook. The purpose of the playbook run is to confirm a successful installation of the newly deployed OpenShift environment. The playbook deploys the "php/cakephp" demo app, which tests the functionality of the master, nodes, registry, and router. The playbook then tests the deployment and cleans up any projects and pods created during the validation run.

The playbook will perform the following steps:

Environment Validation

• Validate the public OpenShift Load Balancer "master-https-lb-map" address from the installation system

• Validate the public OpenShift Load Balancer "router-network-lb-pool" address from the master nodes

• Validate the internal OpenShift Load Balancer "master-network-lb-pool" address from the master nodes

• Validate the local master address

• Validate the health of the ETCD cluster to ensure all ETCD nodes are healthy

• Create a project in OpenShift called validate

• Create an OpenShift Application

• Add a route for the Application

• Validate that the application URL returns a status code of 200, indicating a healthy application

• Delete the validation project

4.1.1. Running the Validation playbook again

Ensure the URLs below and the tag variables match the variables used during deployment.

$ ssh [email protected]
$ cd ~/openshift-ansible-contrib/reference-architecture/gce-ansible
$ ansible-playbook -e 'openshift_master_cluster_public_hostname=master.gce.example.com \
  openshift_master_cluster_hostname=internal-master.gce.example.com \
  wildcard_zone=apps.gce.example.com \
  console_port=443' \
  playbooks/validation.yaml


4.2. Operational Management

With the successful deployment of OpenShift, the following section demonstrates how to confirm proper functionality of the Red Hat OpenShift Container Platform.

4.3. Gathering Host Names

Because GCE automatically appends a random four-character string to instances created from an "Instance Template", it is impossible to know the final hostnames of the instances until after they are created. To view the final hostnames, browse to the GCE "Compute Engine" → "VM instances" dashboard. The following image shows an example of a successful installation.

Figure 10. Compute Engine - VM Instances

To help facilitate the Operational Management chapter, the following hostnames, based on the above image, will be used, with numbers replacing the random four-character strings. The numbers are not meant to suggest ordering or importance; they simply make it easier to distinguish the hostnames.

• ocp-master-01.gce.sysdeseng.com

• ocp-master-02.gce.sysdeseng.com

• ocp-master-03.gce.sysdeseng.com

• ocp-infra-01.gce.sysdeseng.com

• ocp-infra-02.gce.sysdeseng.com

• ocp-node-01.gce.sysdeseng.com

• ocp-node-02.gce.sysdeseng.com

• bastion.gce.sysdeseng.com
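The same hostnames can also be gathered from the command line with the gcloud CLI; a quick sketch, with illustrative and abridged output (the NAME column shows the generated four-character suffixes):

$ gcloud compute instances list
NAME             ZONE           MACHINE_TYPE   ...  STATUS
ocp-master-0d2x  us-central1-a  n1-standard-2  ...  RUNNING
...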

4.4. Running Diagnostics

Perform the following steps from the first master node.


To run diagnostics, SSH into the first master node (ocp-master-01.gce.sysdeseng.com).

$ ssh [email protected]
$ sudo -i

Connectivity to the first master node (ocp-master-01.gce.sysdeseng.com) as the root user should have been established. Run the diagnostics that are included as part of the install.

[root@ocp-master-0v02 ~]# oadm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info: Successfully read a client config file at '/root/.kube/config'
Info: Using context for cluster-admin access: 'validate/internal-master-gce-sysdeseng-com:443/system:admin'
[Note] Performing systemd discovery

[Note] Running diagnostic: ConfigContexts[validate/internal-master-gce-sysdeseng-com:443/system:admin]
  Description: Validate client config context is complete and has connectivity

Info: The current client config context is 'validate/internal-master-gce-sysdeseng-com:443/system:admin':
  The server URL is 'https://internal-master.gce.sysdeseng.com'
  The user authentication is 'system:admin/internal-master-gce-sysdeseng-com:443'
  The current project is 'validate'
  Successfully requested project list; has access to project(s):
    [logging management-infra openshift openshift-infra default kube-system]

[Note] Running diagnostic: ConfigContexts[default/master-gce-sysdeseng-com:443/system:admin]
  Description: Validate client config context is complete and has connectivity

ERROR: [DCli0006 from diagnostic ConfigContexts@openshift/origin/pkg/diagnostics/client/config_contexts.go:285]
  For client config context 'default/master-gce-sysdeseng-com:443/system:admin':
  The server URL is 'https://master.gce.sysdeseng.com'
  The user authentication is 'system:admin/internal-master-gce-sysdeseng-com:443'
  The current project is 'default'
  (*url.Error) Get https://master.gce.sysdeseng.com/api: x509: certificate signed by unknown authority

  This means that we cannot validate the certificate in use by the
  master API server, so we cannot securely communicate with it.
  Connections could be intercepted and your credentials stolen.

  Since the server certificate we see when connecting is not validated
  by public certificate authorities (CAs), you probably need to specify a
  certificate from a private CA to validate the connection.

  Your config may be specifying the wrong CA cert, or none, or there
  could actually be a man-in-the-middle attempting to intercept your
  connection. If you are unconcerned about any of this, you can add the
  --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
  but this is risky and should not be necessary.
  Connections could be intercepted and your credentials stolen.

[Note] Running diagnostic: DiagnosticPod
  Description: Create a pod to run diagnostics from the application standpoint

[Note] Running diagnostic: ClusterRegistry
  Description: Check that there is a working Docker registry

Info: The "docker-registry" service has multiple associated pods each mounted with
  ephemeral storage, but also has a custom config /etc/registryconfig/config.yml
  mounted; assuming storage config is as desired.

WARN: [DClu1012 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:300]
  The pod logs for the "docker-registry-4-0x2le" pod belonging to
  the "docker-registry" service indicated unknown errors.
  This could result in problems with builds or deployments.
  Please examine the log entries to determine if there might be
  any related problems:

  time="2016-10-11T17:12:51-04:00" level=error msg="obsolete configuration detected, please add openshift registry middleware into registry config file"
  time="2016-10-11T17:12:51-04:00" level=error msg="obsolete configuration detected, please add openshift storage middleware into registry config file"
  time="2016-10-11T17:14:39.063687325-04:00" level=error msg="error authorizing context: authorization header required" go.version=go1.6.2 http.request.host="172.30.40.230:5000" http.request.id=4e5031f9-c62b-406c-9c2f-e6083e349218 http.request.method=GET http.request.remoteaddr="172.16.1.1:36832" http.request.uri="/v2/" http.request.useragent="docker/1.10.3 go/go1.6.2 git-commit/5206701-unsupported kernel/3.10.0-327.36.2.el7.x86_64 os/linux arch/amd64" instance.id=1ac16b7a-874c-419f-a402-949f37e7aacb
  time="2016-10-11T17:14:54.845088078-04:00" level=error msg="error authorizing context: authorization header required" go.version=go1.6.2 http.request.host="172.30.40.230:5000" http.request.id=fe7753f3-cc62-415f-a9e6-d3b566f71adc http.request.method=GET http.request.remoteaddr="172.16.1.1:36876" http.request.uri="/v2/" http.request.useragent="docker/1.10.3 go/go1.6.2 git-commit/5206701-unsupported kernel/3.10.0-327.36.2.el7.x86_64 os/linux arch/amd64" instance.id=1ac16b7a-874c-419f-a402-949f37e7aacb

WARN: [DClu1012 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:300]
  The pod logs for the "docker-registry-4-pa319" pod belonging to
  the "docker-registry" service indicated unknown errors.
  This could result in problems with builds or deployments.
  Please examine the log entries to determine if there might be
  any related problems:

  time="2016-10-11T17:12:59-04:00" level=error msg="obsolete configuration detected, please add openshift registry middleware into registry config file"
  time="2016-10-11T17:12:59-04:00" level=error msg="obsolete configuration detected, please add openshift storage middleware into registry config file"

[Note] Running diagnostic: ClusterRoleBindings
  Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

Info: clusterrolebinding/cluster-readers has more subjects than expected.

  Use the oadm policy reconcile-cluster-role-bindings command to update the role binding to remove extra subjects.

Info: clusterrolebinding/cluster-readers has extra subject {ServiceAccount management-infra management-admin }.

[Note] Running diagnostic: ClusterRoles
  Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
  Description: Check there is a working router

[Note] Running diagnostic: MasterNode
  Description: Check if master is also running node (for Open vSwitch)

WARN: [DClu3004 from diagnostic MasterNode@openshift/origin/pkg/diagnostics/cluster/master_node.go:175]
  Unable to find a node matching the cluster server IP.
  This may indicate the master is not also running a node, and is unable
  to proxy to pods over the Open vSwitch SDN.

[Note] Skipping diagnostic: MetricsApiProxy
  Description: Check the integrated heapster metrics can be reached via the API proxy
  Because: The heapster service does not exist in the openshift-infra project at this time,
  so it is not available for the Horizontal Pod Autoscaler to use as a source of metrics.


[Note] Running diagnostic: NodeDefinitions
  Description: Check node records on master

WARN: [DClu0003 from diagnostic NodeDefinition@openshift/origin/pkg/diagnostics/cluster/node_definitions.go:112]
  Node ocp-master-0v02.c.ose-refarch.internal is ready but is marked Unschedulable.
  This is usually set manually for administrative reasons.
  An administrator can mark the node schedulable with:
    oadm manage-node ocp-master-0v02.c.ose-refarch.internal --schedulable=true

  While in this state, pods should not be scheduled to deploy on the node.
  Existing pods will continue to run until completed or evacuated (see
  other options for 'oadm manage-node').

WARN: [DClu0003 from diagnostic NodeDefinition@openshift/origin/pkg/diagnostics/cluster/node_definitions.go:112]
  Node ocp-master-2nue.c.ose-refarch.internal is ready but is marked Unschedulable.
  This is usually set manually for administrative reasons.
  An administrator can mark the node schedulable with:
    oadm manage-node ocp-master-2nue.c.ose-refarch.internal --schedulable=true

  While in this state, pods should not be scheduled to deploy on the node.
  Existing pods will continue to run until completed or evacuated (see
  other options for 'oadm manage-node').

WARN: [DClu0003 from diagnostic NodeDefinition@openshift/origin/pkg/diagnostics/cluster/node_definitions.go:112]
  Node ocp-master-q0jy.c.ose-refarch.internal is ready but is marked Unschedulable.
  This is usually set manually for administrative reasons.
  An administrator can mark the node schedulable with:
    oadm manage-node ocp-master-q0jy.c.ose-refarch.internal --schedulable=true

  While in this state, pods should not be scheduled to deploy on the node.
  Existing pods will continue to run until completed or evacuated (see
  other options for 'oadm manage-node').

[Note] Running diagnostic: ServiceExternalIPs
  Description: Check for existing services with ExternalIPs that are disallowed by master config

[Note] Running diagnostic: AnalyzeLogs
  Description: Check for recent problems in systemd service logs

Info: Checking journalctl logs for 'atomic-openshift-node' service
Info: Checking journalctl logs for 'docker' service

[Note] Running diagnostic: MasterConfigCheck
  Description: Check the master config file


WARN: [DH0005 from diagnostic MasterConfigCheck@openshift/origin/pkg/diagnostics/host/check_master_config.go:52]
  Validation of master config file '/etc/origin/master/master-config.yaml' warned:
  assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console
  assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console

[Note] Running diagnostic: NodeConfigCheck
  Description: Check the node config file

Info: Found a node config file: /etc/origin/node/node-config.yaml

[Note] Running diagnostic: UnitStatus
  Description: Check status for related systemd units

[Note] Summary of diagnostics execution (version v3.3.0.34):
[Note] Warnings seen: 7
[Note] Errors seen: 1

The warnings will not cause issues in the environment.

If a self-signed SSL certificate is in use, one error noting this will be seen, but it will not affect the performance of the application.

Based on the results of the diagnostics, actions can be taken to alleviate any issues.

4.5. Checking the Health of ETCD

This section focuses on the ETCD cluster. It describes the different commands to ensure the cluster is healthy. The internal Cloud DNS names of the nodes running ETCD must be used.

SSH into the first master node (ocp-master-01.gce.sysdeseng.com). Using the output of the command hostname, issue the etcdctl command to confirm that the cluster is healthy.

$ ssh [email protected]
$ sudo -i


$ ETCD_ENDPOINT=$(hostname -f) && etcdctl -C https://$ETCD_ENDPOINT:2379 --ca-file /etc/etcd/ca.crt --cert-file=/etc/origin/master/master.etcd-client.crt --key-file=/etc/origin/master/master.etcd-client.key cluster-health
member 3f67344b00548177 is healthy: got healthy result from https://10.132.0.9:2379
member cc889bf04a07ac7c is healthy: got healthy result from https://10.132.0.7:2379
member cf713f874e4e4f31 is healthy: got healthy result from https://10.132.0.8:2379

In this configuration the ETCD services are distributed among the OpenShift master nodes.

4.6. Default Node Selector

As explained in section 2.12.4, node labels are an important part of the OpenShift environment. By default in the reference architecture installation, the default node selector is set to "role=app" in /etc/origin/master/master-config.yaml on all of the master nodes. This configuration parameter is set by the Ansible role openshift-default-selector on all masters, and the master API service is restarted, which is required when making any changes to the master configuration.

SSH into the first master node (ocp-master-01.gce.sysdeseng.com) to verify the defaultNodeSelector is defined.

$ vi /etc/origin/master/master-config.yaml
...omitted...
projectConfig:
  defaultNodeSelector: "role=app"
  projectRequestMessage: ""
  projectRequestTemplate: ""
...omitted...

If any changes are made to the master configuration, the master API service must be restarted or the configuration change will not take effect. Any changes and the subsequent restart must be done on all masters.
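For reference, a restart of the master API would look like the following; this is a sketch assuming the native HA service name used by OpenShift Container Platform 3.3, run on each master:

# systemctl restart atomic-openshift-master-api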

4.7. Management of Maximum Pod Size

Quotas are set on ephemeral volumes within pods to prohibit a pod from becoming too large and impacting the node. There are three places where sizing restrictions should be set. When persistent volume claims are not set, a pod has the ability to grow as large as the underlying filesystem will allow. The required modifications are set by Ansible. The sections below name the specific Ansible role that defines each parameter, along with the locations on the nodes in which the parameters are set.

OpenShift Volume Quota


At launch time, user-data creates an xfs partition on the /dev/sdc block device, adds an entry to fstab, and mounts the volume with the gquota option. If gquota is not set, the OpenShift node will not be able to start with the "perFSGroup" parameter defined below. This disk and configuration is done on the infrastructure and application nodes. The configuration is not done on the masters because the master nodes are unschedulable.

SSH into the first infrastructure node (ocp-infra-01.gce.sysdeseng.com) to verify the entry exists within fstab.

# vi /etc/fstab
/dev/sdc /var/lib/origin/openshift.local.volumes xfs gquota 0 0
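As an additional check, not part of the original steps, the active mount can be inspected to confirm the quota option took effect (the output should list the xfs mount with a group quota flag such as gquota or grpquota):

# mount | grep openshift.local.volumes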

Docker Storage Setup

The docker-storage-setup file is created at launch time by user-data. This file tells the Docker service to use /dev/sdb and create the volume group docker-vol. The extra Docker storage options ensure that a container can grow no larger than 3G. Docker storage setup is performed on all master, infrastructure, and application nodes.

SSH into the first infrastructure node (ocp-infra-01.gce.sysdeseng.com) to verify /etc/sysconfig/docker-storage-setup matches the information below.

$ vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/sdb
VG=docker-vol
DATA_SIZE=95%VG
EXTRA_DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=3G"

OpenShift Emptydir Quota

The role openshift-emptydir-quota sets a parameter within the node configuration. The perFSGroup setting restricts the ephemeral emptyDir volume from growing larger than 512Mi. This emptyDir quota is applied on the infrastructure and application nodes. The configuration is not done on the masters because the master nodes are unschedulable.

SSH into the first infrastructure node (ocp-infra-01.gce.sysdeseng.com) to verify /etc/origin/node/node-config.yaml matches the information below.

$ vi /etc/origin/node/node-config.yaml
...omitted...
volumeConfig:
  localQuota:
    perFSGroup: 512Mi


4.8. Yum Repositories

In section 2.3 Required Channels, the specific repositories for a successful OpenShift installation were defined. All systems except for the bastion host should have the same subscriptions. To verify that subscriptions match those defined in Required Channels, perform the following. The repositories below are enabled by the rhsm-repos playbook during the installation. The installation will be unsuccessful if the repositories are missing from the system.

$ yum repolist
repo id                             repo name                                           status
google-cloud-compute                Google Cloud Compute                                     4
rhel-7-server-extras-rpms/x86_64    Red Hat Enterprise Linux 7 Server - Extras (RPMs)      319
rhel-7-server-ose-3.3-rpms/x86_64   Red Hat OpenShift Container Platform 3.3 (RPMs)        590
rhel-7-server-rpms/7Server/x86_64   Red Hat Enterprise Linux 7 Server (RPMs)            13,352
repolist: 14,265

4.9. Console Access

This section covers logging into the OpenShift Container Platform management console via the GUI and the CLI. After logging in via one of these methods, applications can then be deployed and managed.

4.9.1. Log into GUI console and deploy an application

Perform the following steps from the local workstation.

Open a browser and access https://master.gce.sysdeseng.com/console. When logging into the OpenShift web interface for the first time, the page will redirect and prompt for Google OAuth credentials. Log into Google using an account that is a member of the domain specified during the install. Next, Google will prompt to grant access to authorize the login. If Google access is not granted, the account will not be able to log in to the OpenShift web console.

To deploy an application, click on the New Project button. Provide a Name and click Create. Next, deploy the jenkins-ephemeral instant app by clicking the corresponding box. Accept the defaults and click Create. Instructions along with a URL will be provided on the next screen for how to access the application. Click Continue to Overview to bring up the management page for the application. Click on the link provided and access the application to confirm functionality.
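For reference, the same deployment can be sketched from the CLI, assuming the jenkins-ephemeral instant app template is available in the cluster (the project name here is a placeholder):

$ oc new-project test-project
$ oc new-app jenkins-ephemeral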

4.9.2. Log into CLI and Deploy an Application

Perform the following steps from your local workstation.


Install the oc client by visiting the public URL of the OpenShift deployment, for example, https://master.gce.sysdeseng.com/console/command-line, and click latest release. When directed to https://access.redhat.com, log in with valid Red Hat customer credentials and download the client relevant to the current workstation. Follow the instructions located on the production documentation site for getting started with the CLI.

A token is required to log in using Google OAuth and OpenShift. The token is presented on the https://master.gce.sysdeseng.com/console/command-line page. Click the click to show token hyperlink and perform the following on the workstation on which the oc client was installed.

$ oc login https://master.gce.sysdeseng.com --token=fEAjn7LnZE6v5SOocCSRVmUWGBNIIEKbjD9h-Fv7p09


After the oc client is configured, create a new project and deploy an application.

$ oc new-project test-app

$ oc new-app https://github.com/openshift/cakephp-ex.git --name=php
--> Found image 2997627 (7 days old) in image stream "php" in project "openshift" under tag "5.6" for "php"

    Apache 2.4 with PHP 5.6
    -----------------------
    Platform for building and running PHP 5.6 applications

  Tags: builder, php, php56, rh-php56

    * The source repository appears to match: php
    * A source build using source code from https://github.com/openshift/cakephp-ex.git will be created
    * The resulting image will be pushed to image stream "php:latest"
    * This image will be deployed in deployment config "php"
    * Port 8080/tcp will be load balanced by service "php"
    * Other containers can access this service through the hostname "php"

--> Creating resources with label app=php ...
    imagestream "php" created
    buildconfig "php" created
    deploymentconfig "php" created
    service "php" created
--> Success
    Build scheduled, use 'oc logs -f bc/php' to track its progress.
    Run 'oc status' to view your app.

$ oc expose service php
route "php" exposed


Display the status of the application.

$ oc status
In project test-app on server https://master.gce.sysdeseng.com

http://test-app.apps.gce.sysdeseng.com to pod port 8080-tcp (svc/php)
  dc/php deploys istag/php:latest <-
    bc/php builds https://github.com/openshift/cakephp-ex.git with openshift/php:5.6
  deployment #1 deployed about a minute ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

Access the application via the URL provided by oc status. The CakePHP application should now be visible.
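A quick non-browser check is also possible; a sketch using curl against the URL reported by oc status (a 200 response indicates the application is serving traffic):

$ curl -s -o /dev/null -w "%{http_code}\n" http://test-app.apps.gce.sysdeseng.com
200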

4.10. Explore the Environment

4.10.1. List Nodes and Set Permissions

If you try to run the following command, it should fail.

# oc get nodes --show-labels
Error from server: User "[email protected]" cannot list all nodes in the cluster

The reason it fails is that the permissions for that user are insufficient. Get the username and configure the permissions.

$ oc whoami

Once the username has been established, log back into a master node and enable the appropriatepermissions for your user. Perform the following step from the first master (ocp-master-01.gce.sysdeseng.com).

# oadm policy add-cluster-role-to-user cluster-admin [email protected]


Attempt to list the nodes again and show the labels.

# oc get nodes --show-labels
NAME                                      STATUS                     AGE       LABELS
ocp-infra-sj2p.c.ose-refarch.internal     Ready                      9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-infra-sj2p.c.ose-refarch.internal,role=infra
ocp-infra-tuu9.c.ose-refarch.internal     Ready                      9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-infra-tuu9.c.ose-refarch.internal,role=infra
ocp-master-0v02.c.ose-refarch.internal    Ready,SchedulingDisabled   9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-master-0v02.c.ose-refarch.internal,role=master
ocp-master-2nue.c.ose-refarch.internal    Ready,SchedulingDisabled   9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-master-2nue.c.ose-refarch.internal,role=master
ocp-master-q0jy.c.ose-refarch.internal    Ready,SchedulingDisabled   9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-master-q0jy.c.ose-refarch.internal,role=master
ocp-node-1rug.c.ose-refarch.internal      Ready                      9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-node-1rug.c.ose-refarch.internal,role=app
ocp-node-9enz.c.ose-refarch.internal      Ready                      9h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west1,failure-domain.beta.kubernetes.io/zone=europe-west1-b,kubernetes.io/hostname=ocp-node-9enz.c.ose-refarch.internal,role=app


4.10.2. List Router and Registry

List the router and registry by changing to the default project.

Perform the following steps from the workstation.

# oc project default
# oc get all
NAME                  REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/docker-registry    4          2         2         config
dc/router             1          2         2         config
NAME                   DESIRED   CURRENT   AGE
rc/docker-registry-1   0         0         9h
rc/docker-registry-2   0         0         9h
rc/docker-registry-3   0         0         9h
rc/docker-registry-4   2         2         8h
rc/router-1            2         2         9h
NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
svc/docker-registry   172.30.40.230    <none>        5000/TCP                  9h
svc/kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     10h
svc/router            172.30.225.162   <none>        80/TCP,443/TCP,1936/TCP   9h
NAME                         READY     STATUS    RESTARTS   AGE
po/docker-registry-4-0x2le   1/1       Running   0          8h
po/docker-registry-4-pa319   1/1       Running   0          8h
po/router-1-gmb8e            1/1       Running   0          9h
po/router-1-gzeew            1/1       Running   0          9h

Observe the output of oc get all.


4.10.3. Explore the Docker Registry

The OpenShift Ansible playbooks configure two infrastructure nodes that have two registries running. In order to understand the configuration and mapping process of the registry pods, the command oc describe is used. oc describe details how registries are configured and mapped to Google Cloud Storage (GCS) for storage. Using oc describe should help explain how HA works in this environment.

Perform the following steps from the workstation.

$ oc describe svc/docker-registry
Name:              docker-registry
Namespace:         default
Labels:            docker-registry=default
Selector:          docker-registry=default
Type:              ClusterIP
IP:                172.30.40.230
Port:              5000-tcp  5000/TCP
Endpoints:         172.16.0.4:5000,172.16.2.3:5000
Session Affinity:  ClientIP
No events.

Notice that the registry has two endpoints listed. Each of those endpoints represents a Docker container.The ClusterIP listed is the actual ingress point for the registries.
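The endpoints can also be listed directly; a quick sketch, not part of the original steps (the IPs match the Endpoints line above):

$ oc get endpoints docker-registry -n default
NAME              ENDPOINTS                         AGE
docker-registry   172.16.0.4:5000,172.16.2.3:5000   9h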

4.10.4. Explore Using the oc Client to Work With Docker

The oc client allows similar functionality to the docker command. To find out more information aboutthe registry storage perform the following.

# oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-8b7c6   1/1       Running   0          2h
docker-registry-2-drhgz   1/1       Running   0          2h


# oc exec docker-registry-2-8b7c6 cat /etc/registryconfig/config.yml
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  gcs:
    bucket: "ose-refarch-openshift-docker-registry"
    rootdirectory: /registry
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift

Observe the gcs stanza. Confirm the bucket name is listed, and access the GCE console. Click on the "Storage" tab and locate the bucket. The bucket should contain the "/registry" content as seen in the image below.

Figure 11. Registry Bucket in GCS

Confirm that the same bucket is mounted to the other registry via the same steps.
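The bucket contents can also be confirmed from the CLI; a sketch using gsutil with the bucket name from the registry configuration above:

$ gsutil ls gs://ose-refarch-openshift-docker-registry/registry/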

4.10.5. Explore Docker Storage

This section will explore the Docker storage on an infrastructure node.

The example below can be performed on any node, but for this example the infrastructure node (ocp-infra-01.gce.sysdeseng.com) is used.

The output below verifies that Docker storage is not using a loopback device.


$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 4
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: docker--vol-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 3.221 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 1.303 GB
 Data Space Total: 25.5 GB
 Data Space Available: 24.19 GB
 Metadata Space Used: 372.7 kB
 Metadata Space Total: 29.36 MB
 Metadata Space Available: 28.99 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: host bridge null
 Authorization: rhel-push-plugin
Kernel Version: 3.10.0-327.36.2.el7.x86_64
Operating System: Employee SKU
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 7.148 GiB
Name: ocp-infra-tuu9
ID: WQG7:MTQD:IF3G:GBOA:5XRO:552Q:EF6J:7IE6:AW6O:3R7J:5AZM:OVSO
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Registries: registry.access.redhat.com (secure), docker.io (secure)

Verify that three disks are attached to the instance. The disk /dev/sda is used for the OS, /dev/sdb is used for Docker storage, and /dev/sdc is used for emptyDir storage for containers that do not use a persistent volume.

$ fdisk -l
Disk /dev/sda: 26.8 GB, 26843545600 bytes, 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x0000b3fd

   Device Boot      Start        End     Blocks  Id System
/dev/sda1   *        2048   52428305   26213129  83 Linux

Disk /dev/sdb: 26.8 GB, 26843545600 bytes, 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start        End     Blocks  Id System
/dev/sdb1            2048   52428799   26213376  8e Linux LVM

Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/docker--vol-docker--pool_tmeta: 29 MB, 29360128 bytes, 57344 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/docker--vol-docker--pool_tdata: 25.5 GB, 25497174016 bytes, 49799168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/docker--vol-docker--pool: 25.5 GB, 25497174016 bytes, 49799168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes


Disk /dev/mapper/docker-8:1-25244720-f9a43128f37b6c5efe261755269b2d4f4d7356ffc7ba8cce2eeaaee24bbb2766: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes

Disk /dev/mapper/docker-8:1-25244720-9528140ae9917d125197b318b3be45453077732861fc556861bac90269bdc991: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes

Disk /dev/mapper/docker-8:1-25244720-a98b399f5c1dcc5bde86c9cd1cf4ce0439d97d2fb806522766905e65fcbab74a: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes

Disk /dev/mapper/docker-8:1-25244720-a58e7bde8f30799bfa6e191a66470258a1f98214707505a9e98fa5c356776b71: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes

4.10.6. Explore Firewall Rules

As mentioned earlier in the document, several firewall rules have been created. The purpose of this section is to encourage exploration of the firewall rules that were created.

Perform the following steps from the GCE web console.

On the main GCE console, in the left-hand navigation panel, select Networking. Select "Firewall Rules" and check the rules that were created as part of the infrastructure provisioning. For example, notice how the ssh-external firewall rule only allows SSH traffic inbound. That can be further restricted to a specific network or host if required.


4.10.7. Explore the Load Balancers

As mentioned earlier in the document, several load balancers have been created. The purpose of this section is to encourage exploration of the load balancers that were created.

Perform the following steps from the GCE web console.

On the main GCE console, in the left-hand navigation panel, select Networking → Load Balancing. Select the master-https-lb-map load balancer and note the Frontend and how it is configured for port 443. That is for the OpenShift web console traffic. On the same tab, notice that the master-https-lb-cert is offloaded at the load balancer level and therefore not terminated on the masters. On the "Backend Services", there should be three master instances running with a Status of Healthy. Next, check the master-network-lb-pool load balancer and see that it contains the same three masters as the "Backend Services" of the master-https-lb-map load balancer. Further details of the configuration can be viewed by exploring the Ansible playbooks to see exactly what was configured.

4.10.8. Explore the GCE Network

As mentioned earlier in the document, a network was created especially for OCP. The purpose of this section is to encourage exploration of the network that was created.

Perform the following steps from the GCE web console.

On the main GCE console, in the left-hand navigation panel, select Networking → Networks. Select the recently created ocp-network and explore the Subnetworks, Firewall Rules, and Routes. More configuration detail can be examined by exploring the Ansible playbooks to see exactly what was configured.

4.11. Persistent Volumes

Persistent volumes (pv) are OpenShift objects that allow for storage to be defined and then claimed by pods to allow for data persistence. The most common persistent volume source on GCE is the GCE persistent disk. A persistent disk volume can only be mounted or claimed by one pod at a time. Mounting of persistent volumes is done by using a persistent volume claim (pvc). This claim will mount the persistent storage to a specific directory within a pod. This directory is referred to as the mountPath.


4.11.1. Creating a Persistent Volume

Log in to the first OpenShift master to define the persistent volume.

$ oc project persistent
$ vi pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "persistent"
spec:
  capacity:
    storage: "10Gi"
  accessModes:
    - "ReadWriteOnce"
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "pd-disk-1"
$ oc create -f pv.yaml
persistentvolume "persistent" created
$ oc get pv
NAME         CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
persistent   10Gi       RWO           Available                       47s
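Note that the gcePersistentDisk source references an existing GCE disk; a sketch of creating the backing disk beforehand, using the pdName from the example above and the zone used throughout this reference architecture:

$ gcloud compute disks create pd-disk-1 --size=10GB --zone=us-central1-a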

4.11.2. Creating a Persistent Volume Claim

The persistent volume claim will change the pod from using EmptyDir non-persistent storage to storage backed by a GCE persistent disk. To claim space from the persistent volume, a database server will be used to demonstrate a persistent volume claim.


$ oc new-app --docker-image registry.access.redhat.com/openshift3/mysql-55-rhel7 --name=db -e 'MYSQL_USER=chmurphy,MYSQL_PASSWORD=password,MYSQL_DATABASE=persistent'

... omitted ...

$ oc get pods
NAME         READY     STATUS    RESTARTS   AGE
db-1-dwa7o   1/1       Running   0          5m

$ oc describe pod db-1-dwa7o

... omitted ...

Volumes:
  db-volume-1:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:

... omitted ...

$ oc volume dc/db --add --overwrite --name=db-volume-1 --type=persistentVolumeClaim --claim-size=10Gi
persistentvolumeclaims/pvc-ic0mu
deploymentconfigs/db

$ oc get pvc
NAME        STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
pvc-ic0mu   Bound     persistent   10Gi       RWO           4s

$ oc get pods
NAME         READY     STATUS    RESTARTS   AGE
db-2-0srls   1/1       Running   0          23s

$ oc describe pod db-2-0srls

... omitted ...

Volumes:
  db-volume-1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-ic0mu
    ReadOnly:   false

... omitted ...


The above has created a database pod with a persistent volume claim, pvc-ic0mu, and has attached the claim to the volume that previously used EmptyDir storage.

4.12. Testing Failure

In this section, reactions to failure are explored. After a successful install, and after some of the smoke tests noted above have been completed, failure testing is executed.

4.12.1. Generate a Master Outage

Perform the following steps from the GCE web console and the OpenShift public URL.

On the main GCE console, in the left-hand navigation panel, select Instances. Locate the running ocp-master-02.gce.sysdeseng.com instance, select it, right-click, and change the state to stopped.
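The same outage can be generated from the CLI; a sketch with gcloud, where ocp-master-abcd is a placeholder for the real instance name with its random four-character suffix:

$ gcloud compute instances stop ocp-master-abcd --zone us-central1-a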

Ensure the console can still be accessed by opening a browser and accessing master.gce.sysdeseng.com. At this point, the cluster is in a degraded state because only 2/3 master nodes are running, but complete functionality remains.
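A quick check of master availability from the workstation is also possible; a sketch using the /healthz endpoint served by the master API (-k skips TLS verification in case a self-signed certificate is in use):

$ curl -k https://master.gce.sysdeseng.com/healthz
ok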

4.12.2. Observe the Behavior of ETCD with a Failed Master Node

SSH into the first master node (ocp-master-01.gce.sysdeseng.com). Using the output of the command hostname, issue the etcdctl command to confirm that the cluster is healthy.

$ ssh [email protected]
$ sudo -i

# hostname
ip-10-20-1-106.ec2.internal
# ETCD_ENDPOINT=$(hostname -f) && etcdctl -C https://$ETCD_ENDPOINT:2379 --ca-file /etc/etcd/ca.crt --cert-file=/etc/origin/master/master.etcd-client.crt --key-file=/etc/origin/master/master.etcd-client.key cluster-health
failed to check the health of member 82c895b7b0de4330 on https://10.20.2.251:2379: Get https://10.20.1.251:2379/health: dial tcp 10.20.1.251:2379: i/o timeout
member 82c895b7b0de4330 is unreachable: [https://10.20.1.251:2379] are all unreachable
member c8e7ac98bb93fe8c is healthy: got healthy result from https://10.20.3.74:2379
member f7bbfc4285f239ba is healthy: got healthy result from https://10.20.1.106:2379
cluster is healthy

Notice how one member of the ETCD cluster is now unreachable. Restart ocp-master-02.gce.sysdeseng.com by following the same steps in the GCE web console as noted above.


4.12.3. Generate an Infrastructure Node Outage

This section shows what to expect when an infrastructure node fails or is brought down intentionally.

Confirm Application Accessibility

Perform the following steps from the browser on a local workstation.

Before bringing down an infrastructure node, check behavior and ensure things are working as expected. The goal of testing an infrastructure node outage is to see how the OpenShift routers and registries behave. Confirm the simple application deployed from before is still functional. If it is not, deploy a new version. Access the application to confirm connectivity. As a reminder, to find the required information and ensure the application is still running: list the projects, change to the project in which the application is deployed, get the status of the application, which includes the URL, and access the application via that URL.

$ oc get projects
NAME               DISPLAY NAME   STATUS
openshift                         Active
openshift-infra                   Active
ttester                           Active
test-app1                         Active
default                           Active
management-infra                  Active

$ oc project test-app1
Now using project "test-app1" on server "https://master.gce.sysdeseng.com".

$ oc status
In project test-app1 on server https://master.gce.sysdeseng.com

http://php-test-app1.apps.sysdeseng.com to pod port 8080-tcp (svc/php-prod)
  dc/php-prod deploys istag/php-prod:latest <-
    bc/php-prod builds https://github.com/openshift/cakephp-ex.git with openshift/php:5.6
  deployment #1 deployed 27 minutes ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

Open a browser and ensure the application is still accessible.

Confirm Registry Functionality

This section is another step to take before initiating the outage of the infrastructure node, to ensure that the registry is functioning properly. The goal is to push an image to the OpenShift registry.


Perform the following steps from a CLI on a local workstation and ensure that the oc client has been configured.

A token is needed so that the Docker registry can be logged into.

# oc whoami -t
feAeAgL139uFFF_72bcJlboTv7gi_bo373kf1byaAT8

Pull a new docker image for the purposes of test pushing.

# docker pull fedora/apache
# docker images

Capture the registry endpoint. The svc/docker-registry shows the endpoint.

# oc status
In project default on server https://master.gce.sysdeseng.com

svc/docker-registry - 172.30.237.147:5000
  dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.3.0.32
    deployment #2 deployed 51 minutes ago - 2 pods
    deployment #1 deployed 53 minutes ago

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

svc/router - 172.30.144.227 ports 80, 443, 1936
  dc/router deploys docker.io/openshift3/ose-haproxy-router:v3.3.0.32
    deployment #1 deployed 55 minutes ago - 2 pods

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Tag the docker image with the endpoint from the previous step.

# docker tag docker.io/fedora/apache 172.30.110.31:5000/openshift/prodapache

Check the images and ensure the newly tagged image is available.

# docker images


Issue a Docker login.

# docker login -u [email protected] -e [email protected] -p _7yJcnXfeRtAbJVEaQwPwXreEhlV56TkgDwZ6UEUDWw 172.30.110.31:5000

# oadm policy add-role-to-user admin [email protected] -n openshift
# oadm policy add-role-to-user system:registry [email protected]
# oadm policy add-role-to-user system:image-builder [email protected]

Push the image to the OpenShift registry now.

# docker push 172.30.110.222:5000/openshift/prodapache
The push refers to a repository [172.30.110.222:5000/openshift/prodapache]
389eb3601e55: Layer already exists
c56d9d429ea9: Layer already exists
2a6c028a91ff: Layer already exists
11284f349477: Layer already exists
6c992a0e818a: Layer already exists
latest: digest: sha256:ca66f8321243cce9c5dbab48dc79b7c31cf0e1d7e94984de61d37dfdac4e381f size: 6186


Get Location of Router and Registry.

Perform the following steps from the CLI of a local workstation.

Change to the default OpenShift project and check the router and registry pod locations.

$ oc project default
Now using project "default" on server "https://master.gce.sysdeseng.com".

$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-8b7c6   1/1       Running   1          21h
docker-registry-2-drhgz   1/1       Running   0          7h
router-1-6y5td            1/1       Running   1          21h
router-1-rlcwj            1/1       Running   1          21h

$ oc describe pod docker-registry-2-8b7c6 | grep -i node
Node:    ocp-infra-01.gce.sysdeseng.com/10.132.0.4
$ oc describe pod docker-registry-2-gmvdr | grep -i node
Node:    ocp-infra-02.gce.sysdeseng.com/10.132.0.5
$ oc describe pod router-1-6y5td | grep -i node
Node:    ocp-infra-01.gce.sysdeseng.com/10.132.0.4
$ oc describe pod router-1-rlcwj | grep -i node
Node:    ocp-infra-02.gce.sysdeseng.com/10.132.0.5

Initiate the Failure and Confirm Functionality

Perform the following steps from the GCE web console and a browser.

On the main GCE console, in the left-hand navigation panel, select Instances. Locate the running infra01 instance, select it, right-click, and change the state to stopped. Wait a minute or two for the registry and router pods to migrate over to the remaining infrastructure node. Check the registry locations and confirm where they are now running.

$ oc describe pod docker-registry-2-8b7c6 | grep -i node
Node:    ocp-infra-01.gce.sysdeseng.com/10.132.0.4
$ oc describe pod docker-registry-2-drhgz | grep -i node
Node:    ocp-infra-02.gce.sysdeseng.com/10.132.0.5

Follow the procedures above to ensure a Docker image can still be pushed to the registry now that infra01 is down.


5. Conclusion

Red Hat solutions involving the OpenShift PaaS are created to deliver a production-ready foundation that simplifies the deployment process, shares the latest best practices, and provides a stable, highly available environment on which to run your production applications.

A successful deployment consists of the following:

• Properly configured servers that meet the OpenShift prerequisites.

• Properly configured DNS.

• Access to a host with Ansible support.

• Proper subscription entitlement on all servers.

Once these requirements are met, the highly available OpenShift environment can be deployed.

For any questions or concerns, please email refarch-feedback@redhat.com and be sure to visit the Red Hat Reference Architecture page to find out about all of our Red Hat solution offerings.


Appendix A: Revision History

Revision   Release Date                  Author(s)

1.0 Tuesday September 8, 2015 Scott Collier

1.1 Friday April 4, 2016 Chris Murphy / Peter Schiffer

1.2 Tuesday September 16, 2016 Chris Murphy / Peter Schiffer

1.3 Monday Oct 10, 2016 Chris Murphy / Peter Schiffer

1.4 Friday Oct 28, 2016 Chris Murphy / Peter Schiffer

1.5 Tuesday Nov 2, 2016 Chris Murphy / Peter Schiffer

1.6 Saturday Nov 5, 2016 Chris Murphy / Peter Schiffer

1.7 Saturday Nov 21, 2016 Chris Murphy / Peter Schiffer



Appendix B: Contributors

1. Jason DeTiberus, content provider

2. Matthew Farrellee, content provider

3. Tim St. Clair, content provider

4. Rob Rati, content provider

5. Aaron Weitekamp, review

6. James Shubin, review

7. Dirk Herrmann, review

8. John Ruemker, review

9. Andrew Beekhof, review

10. David Critch, review

11. Eric Jacobs, review

12. Scott Collier, review
