© 2016 VMware Inc. All rights reserved.
Patrick Daigle
Technical Marketing Architect, vSphere Integrated Containers
vSphere Integrated Containers: Creating a consistent operational model for containers
Agenda
Introduction to containers
Container Run Time Options from VMware
Introducing vSphere Integrated Containers
vSphere Integrated Containers Technical Details
End-user workflow
Conclusion & Questions
Introduction to containers
Containers 101
• A Linux distro = the Linux kernel + management and user-space tools
– i.e. libraries, additional software, docs, etc.
• A container image specifies a base set of tools/libs/sw
– Dockerfile
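As a minimal illustration of the point above (not taken from the deck), a Dockerfile picks a base image that supplies the base set of tools/libs/software and layers extras on top; the `photon:1.0` tag and the `tdnf` package install below are illustrative choices:

```shell
# Write a minimal, hypothetical image config to disk.
cat > Dockerfile <<'EOF'
# The base image supplies the base set of tools/libs/software
FROM photon:1.0
# Layer additional user-space software on top (tdnf is Photon OS's package manager)
RUN tdnf install -y curl
CMD ["/bin/bash"]
EOF
# Show the image config we just wrote; `docker build .` would turn it into an image
cat Dockerfile
```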
[Diagram: a standard Linux container host runs the Linux kernel (4.2), management & user-space tools (libraries, additional software & docs), and the Docker Engine; containers 1..n each layer their own tools/libs/software (the Dockerfile is the image config, e.g. on Photon OS) and run app processes 1..n.]
Dichotomy: Dev/Ops have different "cares"
Developers like:
• Portable – ability to move Dev → Test → Prod
• Fast – rapid start times
• Light – minimal configuration and footprint
Ops need:
• Secure & Control – meet security standards
• Network – hook into existing network
• Data Persistence – access to the "state" of the app
• Consistent Management – single pane of glass
Developers and Ops Divide
Containers IN DEVELOPMENT
Containers IN PRODUCTION
Container run time options from VMware
Container Technology & VMware
• Photon OS – VMware Linux distribution; container host; optimized for vSphere, AWS, GCE
• vSphere Integrated Containers (new vSphere feature) – Virtual Container Host; Docker API endpoint; container visibility & operations
• Photon Platform (new platform) – container-optimized cloud platform; multi-tenant / high scale; Kubernetes as a Service
Introducing vSphere Integrated Containers
Where in the stack?
[Diagram: vSphere Integrated Containers sits on top of the virtualized infrastructure/IaaS layer and below SW development and platform services. It adds a Docker endpoint, a Virtual Container Host, and Net|Sec|Ops visibility, complemented by a container registry and a container management portal; the existing VM estate remains managed by the vRealize Suite. https://github.com/vmware/vic]
[Diagram: on vSphere, each Virtual Container Host (VCH 1, VCH 2) exposes a container API endpoint backed by the VIC Engine, and containers run as containerVMs (C-VMs), each on its own Linux kernel – in contrast with a traditional container host, where a single container engine behind the Docker API runs all containers on one shared kernel.]
The Value Proposition of vSphere Integrated Containers
• Run in the same vSphere environment as VMs
• Virtual Container Hosts backed by a resource pool
• Resources can be dynamically added/removed
• NSX micro-segmentation and networking
• vCenter operations work with containers as they do with VMs (DRS, host evacuation, etc.)
• Ecosystem tools available for VMs can be used with containers (e.g. vRealize Operations)
[Diagram: a Virtual Container Host exposes the Docker API through its container engine and is backed by a vSphere resource pool (e.g. 50 GHz / 512 GB, resizable to 75 GHz / 768 GB); each containerVM runs on its own Photon OS kernel.]
Live Demo: vSphere Integrated Containers
vSphere Integrated Containers Technical Details
vSphere Integrated Containers – Operating Model
[Diagram: a vSphere cluster of ESXi hosts with vCenter Server, VSAN, and NSX. vic-machine-linux create deploys the VCH with its container endpoint; docker run -d -p 80:80 nginx then instantiates a containerVM running the nginx process on its own Linux kernel, alongside regular VMs.]
The Virtual Container Host (VCH)
• It’s a collection of vSphere compute resources wrapped in a vApp construct
• Upon deployment, the VCH includes a "Docker API endpoint VM"
• This is the endpoint users communicate with via the Docker CLI
• The VCH vApp includes all containerVMs instantiated via docker run
• vSphere Integrated Containers has multi-tenancy built in
• A single ESXi host can run n VCHs, each with different resources
VIC Engine Requirements
• Download VIC Engine on the Client Machine
– Run the following commands from your terminal:
• wget https://registry.corp.local:9443/vic_1.1.1.tar.gz
• tar -zxvf vic_1.1.1.tar.gz
• DRS has to be enabled on the vSphere Cluster
• vNetwork Distributed Switch is required
– Create L2 (Logical Switch) isolated dPG for Containers-VCH communication. A unique, isolated network is needed for each VCH (with NSX, VXLAN can be used for isolation).
– Create a Logical Switch for containers' external connectivity, with Internet access. DHCP can be used (e.g. with an NSX Edge). The external network can be shared between multiple VCHs.
• Open outgoing TCP 2377 on each ESXi host
– Use the vic-machine update firewall command
• Example:
./vic-machine-linux update firewall --target vcsa-01a.corp.local --user [email protected] --compute-resource RegionA01-COMP01 --allow
Installation of the Virtual Container Host (VCH)
• Run the vic-machine command from the client machine to create the VCH vApp in the vSphere cluster.
– Example:
./vic-machine-linux create --target vcsa-01a.corp.local --user [email protected] --compute-resource RegionA01-COMP01 --image-store RegionA01-ISCSI01-COMP01 --volume-store RegionA01-ISCSI01-COMP01:default --public-network VM-RegionA01-vDS-COMP --public-network-ip 192.168.100.22/24 --public-network-gateway 192.168.100.1 --dns-server 192.168.110.10 --container-network VM-RegionA01-vDS-COMP:routable --bridge-network Bridge01-RegionA01-vDS-COMP --name virtual-container-host --registry-ca=/etc/docker/certs.d/registry.corp.local/ca.crt --no-tls
• Add the --container-network option if you want to connect containers to a network other than the bridge network (recommended)
• All components to be consumed later by the Docker client must be identified during VCH installation
• Command result
– Installer completed successfully
VIC Engine packaging
VIC Engine comes with a set of assets that "inject" VCHs into a vSphere setup:
• vic-machine is the CLI that creates Virtual Container Hosts
– Available for Linux, Windows, and Mac
• appliance.iso is the ISO each VCH endpoint VM boots from
– VCH endpoint VMs are stateless and only boot from the ISO
– This greatly simplifies management and upgrades
• bootstrap.iso is the ISO used as the "just enough kernel" for containerVMs
– On top of this kernel, VIC "layers" the Docker image you want to run
– This blog has good info on C-VM persistency: http://blog.think-v.com/?p=4302
VCH Network nomenclature
[Diagram: the VCH vApp contains the Docker endpoint VM, which attaches to the Docker client network, the vSphere management network, the public network, and the bridge network(s); containerVMs can additionally attach directly to container network(s).]
• Docker Client Management Network: the network used to interact with the VCH VM via a Docker client.
• vSphere Management Network: the network used by the VCH VM and the ContainerVMs to interact with vSphere.
• Public Network: the equivalent of eth0 on a Docker host. This is the network used to expose services to the outside world (via -p).
• Bridge Network(s): the equivalent of docker0 on a Docker host.
• Container Network(s): networks containers can attach to directly for inbound/outbound communication, bypassing the VCH VM.
VIC Networking Option 1 – Default Docker behavior
[Diagram: within the Virtual Container Host (vSphere cluster), the VCH VM bridges an internal isolated network (VCH at 172.16.0.1, containerVMs at 172.16.0.2, …) to the public network, where the VCH holds 10.0.1.2 via DHCP.]
• Containers are accessed through the VCH VM
• Default if no container port group is specified when creating the container
• Typical docker run -p use case
VIC Networking Option 2 – Connecting containers directly to external networks
• Containers can be attached directly to container networks to avoid a single point of failure
– The --container-network option has to be used during VCH installation
• DHCP can be used to assign container IP addresses
• A container can be accessed directly through its IP address, without NAT
[Diagram: containerVMs attach directly to container network 1 and container network 2 with DHCP-assigned addresses; the VCH VM sits on external network 1 at 10.0.1.2 (DHCP).]
• Typical docker run --network use case
• Container networks are displayed with docker network ls
• Look up the DHCP-assigned IP with docker inspect
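The direct-attach workflow can be sketched as follows. This is a dry-run (commands are printed, not executed); "routable" is the container-network label from the vic-machine create example earlier in this deck, and the container id is a placeholder:

```shell
# Dry-run sketch: remove the echo wrappers to run against a real VCH endpoint.
NET=routable   # label assigned via --container-network at VCH install time
echo "docker network ls"                      # container networks show up here
echo "docker run -d --net $NET nginx"         # containerVM attaches directly to the network
echo "docker inspect <DOCKER_ID> | grep IPAddress"   # find the DHCP-assigned IP
```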
Storage components
• Image Store (--image-store)
• The only storage related mandatory parameter
• The datastore where VCHs and Docker images get saved
• Docker images get saved in a folder named "VIC" under the VCH folder
• The --image-store option supports specifying a folder (e.g. datastore_name/folder_name)
• If you do so, the VIC folder moves inside folder_name while the VCH folder remains in the datastore root
• The --image-store option can be shared among different VCHs
• When the same folder_name is used, different namespaces are created to avoid race conditions
• Volume Store (--volume-store)
• The --volume-store option supports specifying a folder (e.g. datastore_name/folder_name)
• It requires a label to be specified for later reference by the Docker CLI
• The --volume-store option supports being shared among different VCHs
• Best practice: specify a folder name
VCH Admin Portal & Logs
Private Container Registry
user management & access control
role-based access control
AD/LDAP integration
policy based image replication
audit and logs
RESTful API
lightweight & easy deployment
bandwidth efficiency
content protection
open-source under Apache 2 license
Container Management Portal
Container Provisioning from Templates
• Different registries can be used with Project Admiral
• Docker compose import / export support is available
• Containers can be provisioned from images or templates
• vSphere Integrated Containers (VIC) provisioning also supported
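For the compose import/export support mentioned above, a minimal compose file of the kind the portal can import might look like this (the service name, image, and port mapping are illustrative, not taken from the deck):

```shell
# Write a hypothetical minimal compose file for import into the management portal.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"
EOF
# Show what we'd hand to the portal's import dialog
cat docker-compose.yml
```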
End-User Workflow
Basic End-User Commands
• Set up DOCKER_HOST environment variable
– export DOCKER_HOST=192.168.100.22:2375
• Run a docker image from DockerHub (Internet)
– docker run busybox date
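The two steps above can be combined in a short session. The endpoint address matches the --public-network-ip/--no-tls values from this deck's vic-machine create example; adjust it to your environment:

```shell
# Point the stock Docker client at the VCH endpoint instead of a local daemon.
export DOCKER_HOST=192.168.100.22:2375
echo "Docker client now targets: $DOCKER_HOST"
# docker run busybox date   # would now execute on the VCH as a new containerVM
```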
Basic End-User Commands (cont’d)
• Run a docker image from the private registry
– docker run registry.corp.local/myproject/busybox:1.26 date
• Login to private registry
– docker login registry.corp.local
Basic End-User Commands (cont’d)
• Creating a docker volume (for data persistence)
– docker volume create --opt Capacity=10GB --name registrycache
• The volume gets created as a VMDK
Advanced End-User Commands
• Self-provision a docker daemon
– docker run -v registrycache:/var/lib/docker --net external -d vmware/dinv:latest --tls -r registry.corp.local
• Find IP Address of newly created docker daemon
– docker inspect <DOCKER_ID> | grep IPAddress
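The grep pipeline above can be demonstrated against a trimmed-down sample of docker inspect output; the JSON and the IP below are illustrative, not captured from a real daemon:

```shell
# Illustrative fragment of `docker inspect` output (real output is a larger JSON array).
sample='{ "NetworkSettings": { "IPAddress": "192.168.100.57" } }'
# The deck's approach: grep the inspect output for the IPAddress field
echo "$sample" | grep IPAddress
# Extract just the address with sed
ip=$(echo "$sample" | grep IPAddress | sed 's/.*"IPAddress": "\([^"]*\)".*/\1/')
echo "$ip"
```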
Advanced End-User Commands – Registry (cont’d)
• Tag an image
– docker -H 192.168.100.128:2375 tag 00f017a8c2a6 registry.corp.local/myproject/busybox:1.26
• Push image to private registry
– docker -H 192.168.100.128:2375 push registry.corp.local/myproject/busybox:1.26
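The registry-qualified image name used in the tag and push commands above follows the registry/project/image:tag pattern; assembling it from its parts (values taken from this deck's examples) makes the structure explicit:

```shell
# Build the fully-qualified image name used for tag/push against the private registry.
REGISTRY=registry.corp.local
PROJECT=myproject
IMAGE=busybox
TAG=1.26
FQIN="$REGISTRY/$PROJECT/$IMAGE:$TAG"
echo "$FQIN"
# docker -H 192.168.100.128:2375 tag 00f017a8c2a6 "$FQIN"
# docker -H 192.168.100.128:2375 push "$FQIN"
```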
Advanced End-User Commands – Registry (cont’d)
• Note role-based access controls in Private Registry
– testguest user is authorized to pull only
– testdev user is authorized to push & pull
– User membership/role fully configurable per project
– Authentication against AD is available but out-of-scope for the POC
Conclusion
vSphere Integrated Containers: SDDC Integrations
• We bring the following Capabilities to Container Management:
Storage and Availability
Compute
Network and Security
• Auto Load Balancing across multiple Container Hosts
• Scale and manage Docker Containers without Service Disruption
• Portable and persistent Storage for Docker Containers
• Virtualized Networking and security (NSX) for Container-based Applications
• Micro-Segmentation - isolating traffic flow from one container to another
Intelligent Operations
• Balance workloads across multiple container hosts using existing management tools
Call to Action
Try it out: HOL-1730-USE-1 – vSphere Integrated Containers
Getting Started with vSphere Integrated Containers: https://vmware.github.io/vic/assets/files/html/vic_installation/index.html
Visit us on GitHub: https://vmware.github.io/vic-product/
https://github.com/vmware/vic
https://github.com/vmware/harbor
https://github.com/vmware/admiral
Questions?
@pdaigle | @cloudnativeapps | #vmwcna
ca.linkedin.com/in/patdaigle
blogs.vmware.com/cloudnative | vmware.github.io/
Engage!